AI has made significant strides in recent years, leading to a wide range of applications in both civilian and military sectors. Militaries view AI as a means of developing faster and more effective technologies. While AI offers benefits such as improved operational efficiency and more precise targeting, it also raises serious ethical and legal concerns, particularly regarding human rights violations. Autonomous weapons that make decisions without human input can threaten the right to life and violate international humanitarian law. To address these issues, we propose a three-stage framework (Design, Deployment, and During/After Use) for evaluating human rights concerns across the design, deployment, and use of military AI. Each stage comprises multiple components that address concerns specific to that stage, ranging from algorithmic bias and regulatory gaps to violations of international humanitarian law. Through this framework, we aim to balance the advantages of AI in military operations against the need to protect human rights.