General purpose AI, such as ChatGPT, seems to have lowered the barriers for the public to use AI and harness its power. However, the governance and development of AI still remain in the hands of a few, and the pace of development is accelerating without a comprehensive assessment of risks. As a first step towards democratic risk assessment and design of general purpose AI, we introduce PARTICIP-AI, a carefully designed framework for laypeople to speculate on and assess AI use cases and their impacts. Our framework allows us to study more nuanced and detailed public opinions on AI by collecting use cases, surfacing diverse harms through risk assessment under alternate scenarios (i.e., developing and not developing a use case), and illuminating tensions over AI development through a concluding choice on whether each use case should be developed. To showcase the promise of our framework for informing democratic AI development, we run a medium-scale study with inputs from 295 demographically diverse participants. Our analyses show that participants' responses emphasize applications for personal life and society, contrasting with the business focus of most current AI development. We also surface a diverse set of envisioned harms, such as distrust in AI and institutions, complementary to those defined by experts. Furthermore, we find that the perceived impact of not developing a use case significantly predicts participants' judgements of whether it should be developed, highlighting lay users' concerns about techno-solutionism. We conclude with a discussion of how frameworks like PARTICIP-AI can further guide democratic AI development and governance.