Complex decision-making by autonomous machines and algorithms may come to underpin future society, and generative AI is emerging as a powerful engine for this transition. However, we show that generative-AI-driven development carries a critical pitfall: fairness. In robotic applications, intuitions about fairness are common, yet a precise, implementable definition that captures both user utility and the inherent randomness of data is missing. Here we provide a utility-aware fairness metric for robotic decision-making and analyze fairness jointly with user-data privacy, deriving conditions under which privacy budgets govern fairness metrics. The result is a unified framework that formalizes and quantifies fairness and its interplay with privacy, which we test in a robot navigation task. Since legal requirements will oblige most robotic systems to enforce user privacy, it is striking that these same privacy budgets can be used in tandem to meet fairness targets. Addressing fairness together with privacy in this way is a step toward the ethical use of AI and strengthens trust in autonomous robots deployed in everyday environments.
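As a toy illustration of how a privacy budget can govern a fairness metric (our own construction under simplifying assumptions, not the paper's actual metric): suppose a robot chooses which of two user groups to serve via a Laplace report-noisy-max over their utilities, a standard differentially private selection mechanism. The disparity in selection rates then shrinks as the privacy budget ε tightens, because stronger privacy injects more noise into the choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def selection_rate(u_a, u_b, epsilon, sensitivity=1.0, n=200_000):
    """Fraction of trials in which group A wins a Laplace report-noisy-max
    selection under privacy budget epsilon (illustrative values only)."""
    noise_a = rng.laplace(scale=2 * sensitivity / epsilon, size=n)
    noise_b = rng.laplace(scale=2 * sensitivity / epsilon, size=n)
    return np.mean(u_a + noise_a > u_b + noise_b)

# Hypothetical group utilities; disparity = |P(serve A) - P(serve B)|.
for eps in (0.1, 1.0, 10.0):
    rate = selection_rate(0.8, 0.6, eps)
    print(f"epsilon={eps:>4}: P(serve group A) ~ {rate:.2f}, "
          f"disparity = {abs(2 * rate - 1):.2f}")
```

As ε → 0 the selection becomes uniform (disparity → 0, but the decision is pure noise), while large ε recovers the deterministic, maximally disparate choice; in this sketch the privacy budget alone therefore determines where the system sits on that fairness-utility curve.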