The accelerating militarization of artificial intelligence has transformed the ethics, politics, and governance of warfare. This article interrogates how AI-driven targeting systems function as epistemic infrastructures that classify, legitimize, and execute violence, using Israel's conduct in Gaza as a paradigmatic case. Through the lens of responsibility, the article examines three interrelated dimensions: (a) political responsibility, exploring how states exploit AI to accelerate warfare while evading accountability; (b) professional responsibility, addressing the complicity of technologists, engineers, and defense contractors in the weaponization of data; and (c) personal responsibility, probing the moral agency of individuals who participate in or resist algorithmic governance. This is complemented by an examination of the position and influence of those participating in public discourse, whose narratives often obscure or normalize AI-enabled violence. The Gaza case reveals AI not as a neutral instrument but as an active participant in the reproduction of colonial hierarchies and the normalization of atrocity. Ultimately, the paper calls for a reframing of technological agency and accountability in the age of automated warfare. It concludes that confronting algorithmic violence demands a democratization of AI ethics, one that resists technocratic fatalism and centers the lived realities of those most affected by high-tech militarism.