As the various published AI ethics principles converge toward consensus, a gap remains between these high-level principles and the practical techniques that can be readily adopted to design and develop responsible AI systems. We examine the practices and experiences of researchers and engineers at Australia's national scientific research agency (CSIRO) who are involved in designing and developing AI systems for many application areas. Semi-structured interviews were used to examine how the participants' practices relate to and align with a set of high-level AI ethics principles proposed by the Australian Government. The principles comprise: (1) privacy protection and security, (2) reliability and safety, (3) transparency and explainability, (4) fairness, (5) contestability, (6) accountability, (7) human-centred values, and (8) human, social and environmental wellbeing. We discuss the insights gained from the interviews, including various tensions and trade-offs between the principles, and provide suggestions for implementing each high-level principle. We also present suggestions aimed at enhancing associated support mechanisms.