As artificial intelligence (AI) systems are increasingly deployed across the world, they are also increasingly implicated in AI incidents: events in which AI systems harm individuals or society. As a result, industry, civil society, and governments worldwide are developing best practices and regulations for monitoring and analyzing AI incidents. The AI Incident Database (AIID) is a project that catalogs AI incidents and supports further research by providing a platform for classifying incidents toward different operational and research-oriented goals. This study reviews the AIID's dataset of 750+ AI incidents, along with two independent taxonomies applied to these incidents, to identify common challenges in indexing and analyzing AI incidents. We find that certain patterns of AI incidents present structural ambiguities that challenge incident databasing, and we explore why epistemic uncertainty in AI incident reporting is unavoidable. We therefore report mitigations that make incident-reporting processes more robust to uncertainty about the cause, extent of harm, severity, and technical details of implicated systems. With these findings, we discuss how to develop future AI incident reporting practices.