Artificial intelligence surrogates are systems designed to infer a patient's preferences when that individual loses decision-making capacity. Fairness in such systems remains insufficiently explored: traditional algorithmic fairness frameworks fall short in contexts where decisions are relational, existential, and culturally diverse. This paper develops an ethical framework for algorithmic fairness in AI surrogates by mapping major fairness notions onto real-world end-of-life scenarios, then examining fairness across moral traditions. The authors argue that fairness in this domain extends beyond parity of outcomes to encompass moral representation: fidelity to the patient's values, relationships, and worldview.