Organisations that design and deploy artificial intelligence (AI) systems increasingly commit themselves to high-level ethical principles. However, a gap remains between principles and practice in AI ethics. One major obstacle organisations face when attempting to operationalise AI ethics is the lack of a well-defined material scope. Put differently, the question of which systems and processes AI ethics principles ought to apply to remains unanswered. Admittedly, there is no universally accepted definition of AI, and different systems pose different ethical challenges. Nevertheless, pragmatic problem-solving demands that things be sorted so that their grouping promotes successful action towards some specific end. In this article, we review and compare previous attempts to classify AI systems for the purpose of implementing AI governance in practice. We find that the classification attempts found in previous literature use one of three mental models: the Switch, a binary approach according to which systems either are or are not considered AI systems depending on their characteristics; the Ladder, a risk-based approach that classifies systems according to the ethical risks they pose; and the Matrix, a multi-dimensional classification that takes various aspects into account, such as context, data input, and decision model. Each of these models comes with its own set of strengths and weaknesses. By conceptualising different ways of classifying AI systems into simple mental models, we hope to provide organisations that design, deploy, or regulate AI systems with the conceptual tools needed to operationalise AI governance in practice.
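The three mental models can be sketched as three classification functions over a system profile. This is a minimal illustrative sketch only: the `SystemProfile` attributes, tier names, and classification rules below are hypothetical placeholders, not criteria proposed in the article.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical system profile; the attribute names are illustrative,
# not drawn from the article or any specific regulation.
@dataclass
class SystemProfile:
    uses_machine_learning: bool
    decision_autonomy: str   # e.g. "advisory" or "automated"
    context: str             # e.g. "hiring" or "spam_filtering"

# The Switch: a binary predicate -- a system either is or is not "AI",
# based on some characteristic (here, arbitrarily, the use of ML).
def is_ai_system(p: SystemProfile) -> bool:
    return p.uses_machine_learning

# The Ladder: a risk-based tiering; the rules here are invented examples.
class RiskTier(Enum):
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3

def risk_tier(p: SystemProfile) -> RiskTier:
    if p.context == "hiring" and p.decision_autonomy == "automated":
        return RiskTier.HIGH
    if p.decision_autonomy == "automated":
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# The Matrix: a multi-dimensional classification combining several
# aspects (context, decision model, ...) rather than a single axis.
def matrix_class(p: SystemProfile) -> tuple:
    decision_model = "learned" if p.uses_machine_learning else "rule_based"
    return (p.context, p.decision_autonomy, decision_model)
```

Note how the same system lands in one bucket under the Switch but in a richer coordinate under the Matrix, which is the trade-off between simplicity and expressiveness the abstract alludes to.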