This paper introduces a collaborative, human-centred taxonomy of AI, algorithmic, and automation harms. We argue that existing taxonomies, while valuable, can be narrow and unclear, typically cater to practitioners and government, and often overlook the needs of the wider public. Drawing on existing taxonomies and a large repository of documented incidents, we propose a taxonomy that is clear and understandable to a broad range of audiences, as well as flexible, extensible, and interoperable. Refined iteratively with topic experts and validated through crowdsourced annotation testing, the taxonomy can serve as a powerful tool for civil society organisations, educators, policymakers, product teams, and the general public. By fostering a greater understanding of the real-world harms of AI and related technologies, we aim to empower NGOs and individuals to identify and report violations, inform policy discussions, and encourage responsible technology development and deployment.