The recent embrace of machine learning (ML) in the development of autonomous weapons systems (AWS) creates serious risks to geopolitical stability and the free exchange of ideas in AI research. This topic has received little recent attention compared to risks stemming from superintelligent artificial general intelligence (AGI), but it requires fewer assumptions about the course of technological development and is thus a nearer-future issue. ML is already enabling the substitution of AWS for human soldiers in many battlefield roles, reducing the upfront human cost, and thus the political cost, of waging offensive war. Against peer adversaries, this increases the likelihood of "low-intensity" conflicts that risk escalation into broader warfare. Against non-peer adversaries, it reduces the domestic blowback to wars of aggression. This effect can occur regardless of other ethical issues surrounding the use of military AI, such as the risk of civilian casualties, and it does not require any superhuman AI capabilities. Further, the military value of AWS raises the specter of an AI-powered arms race and the misguided imposition of national security restrictions on AI research. Our goal in this paper is to raise awareness among the public and ML researchers of the near-future risks posed by full or near-full autonomy in military technology, and we provide regulatory suggestions to mitigate these risks. We call upon AI policy experts and the defense AI community in particular to embrace transparency and caution in their development and deployment of AWS, so as to avoid the negative effects on global stability and AI research that we highlight here.