The recent embrace of machine learning (ML) in the development of autonomous weapons systems (AWS) creates serious risks to geopolitical stability and the free exchange of ideas in AI research. This topic has recently received far less attention than risks stemming from superintelligent artificial general intelligence (AGI), yet it requires fewer assumptions about the course of technological development and is thus a nearer-term issue. ML is already enabling the substitution of AWS for human soldiers in many battlefield roles, reducing the upfront human cost, and thus the political cost, of waging offensive war. Against peer adversaries, this increases the likelihood of "low intensity" conflicts that risk escalating into broader warfare. Against non-peer adversaries, it reduces the domestic blowback to wars of aggression. This effect can occur regardless of other ethical concerns around military AI, such as the risk of civilian casualties, and requires no superhuman AI capabilities. Further, the military value of AWS raises the specter of an AI-powered arms race and the misguided imposition of national-security restrictions on AI research. Our goal in this paper is to raise awareness among the public and ML researchers of the near-future risks posed by full or near-full autonomy in military technology, and we offer regulatory suggestions to mitigate these risks. We call on AI policy experts and the defense AI community in particular to embrace transparency and caution in the development and deployment of AWS, so as to avoid the negative effects on global stability and AI research that we highlight here.