Fairness monitoring is critical for detecting algorithmic bias, as mandated by the EU AI Act. Since such monitoring requires sensitive user data (e.g., ethnicity), the AI Act permits its processing only under strict privacy measures, such as multi-party computation (MPC), in compliance with the GDPR. However, the effectiveness of such secure monitoring protocols ultimately depends on people's willingness to share their data, and little is known about how different MPC protocol designs shape user acceptance. To address this gap, we conducted an online survey with 833 participants in Europe, examining user acceptance of various MPC protocol designs for fairness monitoring. Findings suggest that users prioritized risk-related attributes (e.g., the privacy protection mechanism) in direct evaluations but benefit-related attributes (e.g., the fairness objective) in simulated choices, with acceptance further shaped by their fairness and privacy orientations. We derive implications for deploying and communicating privacy-preserving protocols in ways that foster informed consent and align with user expectations.
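To illustrate the underlying idea of MPC-based fairness monitoring referenced above, the following is a minimal sketch, not any of the protocol designs evaluated in the survey: several parties hold additive secret shares of each user's protected-group and outcome flags, and only the aggregate counts needed for a demographic-parity check are ever reconstructed. The field modulus, the `share`/`reconstruct` helpers, and the toy data are all illustrative assumptions.

```python
import secrets

PRIME = 2**61 - 1  # prime modulus for the additive secret-sharing field (assumed)

def share(value: int, n_parties: int) -> list[int]:
    """Split an integer into n_parties additive shares summing to value mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recover the shared value by summing all shares mod PRIME."""
    return sum(shares) % PRIME

# Toy data: each user contributes a (protected-group flag, positive-outcome flag) pair.
users = [(1, 1), (1, 0), (0, 1), (0, 1), (1, 1), (0, 0)]
n_parties = 3

# Each party locally accumulates its shares of four counters:
# [group-A size, group-A positives, group-B size, group-B positives].
party_totals = [[0, 0, 0, 0] for _ in range(n_parties)]
for group, outcome in users:
    counters = [group, group & outcome, 1 - group, (1 - group) & outcome]
    for i, counter in enumerate(counters):
        for p, s in enumerate(share(counter, n_parties)):
            party_totals[p][i] = (party_totals[p][i] + s) % PRIME

# Only the aggregates are reconstructed; no single party ever sees an
# individual's group membership or outcome.
a_size, a_pos, b_size, b_pos = (
    reconstruct([party_totals[p][i] for p in range(n_parties)]) for i in range(4)
)
print("demographic parity gap:", a_pos / a_size - b_pos / b_size)
```

The design choice this sketch highlights is the one at stake in the survey: individual sensitive attributes never leave the sharing layer, yet the monitoring party still obtains the group-level statistic needed to detect bias.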