Dominant approaches, such as the EU's "Trustworthy AI framework", treat trust as a property that can be designed for, evaluated, and governed according to normative and technical criteria. They do not address how trust is subjectively cultivated and experienced, culturally embedded, and inherently relational. This paper proposes expanded principles for trust in AI that can be incorporated into common development methods and that frame trust as a dynamic, temporal relationship involving transparency and mutual respect. We draw on relational ethics, and in particular African communitarian philosophies, to foreground the nuances of inclusive, participatory processes and long-term relationships with communities. Involving communities throughout the AI lifecycle can foster meaningful relationships with AI design and development teams that incrementally build trust and promote more equitable, context-sensitive AI systems. We illustrate how trust-enabling principles grounded in African relational ethics can be operationalised through two AI use cases: healthcare and education.