The rapid overall increase in the use of artificial intelligence (AI) is linked to various initiatives that propose AI 'for good'. However, the goals of such projects often lack transparency, and their actual impacts on society and the planet go unevaluated. We close this gap by proposing public interest and sustainability as a dual regulatory concept that together forms the framework needed for just and sustainable development, a framework that can be operationalized and used to assess AI systems. Based on this framework, and building on existing work in auditing, we introduce the Impact-AI-method, a qualitative audit method for evaluating concrete AI projects with respect to public interest and sustainability. The interview-based method captures a project's governance structure, its theory of change, its AI model and data characteristics, and its social, environmental, and economic impacts. We also propose a catalog of assessment criteria for rating the outcome of the audit and for producing an accessible output that civil society can debate broadly. The Impact-AI-method, developed in a transdisciplinary research setting together with NGOs and a multi-stakeholder research council, is intended as a reusable blueprint that both informs public debate about AI 'for good' claims and creates transparency around AI systems that purport to contribute to just and sustainable development.