Across academia, industry, and government, ``AI'' has become central to research and development, regulatory debates, and promises of ever faster and more capable decision-making and action. In numerous domains, especially safety-critical ones, there are significant concerns over how ``AI'' may affect decision-making, responsibility, or the likelihood of mistakes (to name only a few categories of critique). However, the target of most critiques is simply ``AI'', a broad term admitting many (types of) systems, each used for a variety of tasks and each carrying its own limitations, challenges, and potential use cases. In this article, we take the military domain as a case study and present both a loose enumerative taxonomy of the systems captured under the umbrella term ``military AI'' and a discussion of the challenges each raises. In doing so, we highlight that critiques of one (type of) system will not always transfer to other (types of) systems. Building on this, we argue that for debates to move forward fruitfully, discussions must be made more precise and ``AI'' should be excised from them to the extent possible. Researchers, developers, and policy-makers should state exactly which systems they have in mind and which benefits and risks attend the deployment of those particular systems. While we focus on AI in the military as an exemplar of broader trends in discussions of ``AI'', the argument's conclusions are general and bear on discussions of AI across a host of domains.