Across academia, industry, and government, ``AI'' has become central to research and development, regulatory debates, and promises of ever faster and more capable decision-making and action. In numerous domains, especially safety-critical ones, there are significant concerns over how ``AI'' may affect decision-making, responsibility, or the likelihood of mistakes (to name only a few categories of critique). However, the target of most critiques is simply ``AI'', a broad term admitting many (types of) systems used for a variety of tasks, each coming with its own set of limitations, challenges, and potential use cases. In this article, we focus on the military domain as a case study and present both a loose enumerative taxonomy of systems captured under the umbrella term ``military AI'' and a discussion of the challenges each poses. In doing so, we highlight that critiques of one (type of) system will not always transfer to other (types of) systems. Building on this, we argue that in order for debates to move forward fruitfully, it is imperative that the discussions be made more precise and that ``AI'' be excised from debates to the extent possible. Researchers, developers, and policy-makers should make clear exactly what systems they have in mind and what possible benefits and risks attend the deployment of those particular systems. While we focus on AI in the military as an exemplar of the overall trends in discussions of ``AI'', the argument's conclusions are broad and have import for discussions of AI across a host of domains.