Privacy-preserving AI algorithms are widely adopted across domains, but their lack of transparency can raise accountability concerns. While auditing algorithms can address this issue, machine-based audit approaches are often costly and time-consuming. Herd audit, on the other hand, offers an alternative solution by harnessing collective intelligence. Nevertheless, epistemic disparity among auditors, that is, differences in expertise and in access to knowledge, may degrade audit performance. An effective herd audit establishes a credible accountability threat for algorithm developers, incentivizing them to uphold their claims. In this study, our objective is to develop a systematic framework that examines the impact of herd audits on algorithm developers using a Stackelberg game approach. The optimal strategy for auditors highlights the importance of easy access to relevant information, as it increases the auditors' confidence in the audit process. Similarly, the optimal choice for developers indicates that herd audit is viable when auditors face lower costs in acquiring knowledge. By enhancing transparency and accountability, herd audit contributes to the responsible development of privacy-preserving algorithms.
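The Stackelberg interaction described above can be sketched as a toy leader-follower game solved by backward induction. Everything here is illustrative and hypothetical, not the paper's model: the payoff forms, all parameters (`benefit`, `penalty`, `knowledge_cost`), and the assumption that the developer leads are placeholders chosen only to exhibit the qualitative finding that cheaper knowledge acquisition for auditors induces more compliant developer behavior.

```python
import numpy as np

# Toy Stackelberg sketch (hypothetical model, illustrative parameters only).
# Leader: developer picks compliance effort x in [0, 1] (x = 1: claim fully upheld).
# Follower: an auditor picks knowledge-acquisition effort e in [0, 1].

def auditor_best_effort(x, knowledge_cost):
    """Follower best response: detection payoff grows with effort e, but
    acquiring the knowledge needed to audit is costly (epistemic disparity)."""
    efforts = np.linspace(0.0, 1.0, 101)
    # Chance of catching a deviation scales with (1 - x) and with effort e,
    # minus a convex cost of acquiring knowledge.
    payoff = (1 - x) * efforts - knowledge_cost * efforts**2
    return efforts[int(np.argmax(payoff))]

def developer_utility(x, e, benefit=1.0, penalty=3.0, effort_cost=0.5):
    """Leader payoff: deviating by (1 - x) yields a benefit but risks a
    penalty proportional to the auditor's effort; compliance itself is costly."""
    deviation = 1.0 - x
    return benefit * deviation - penalty * deviation * e - effort_cost * x

def stackelberg_equilibrium(knowledge_cost):
    """Backward induction: the leader anticipates the follower's best response."""
    xs = np.linspace(0.0, 1.0, 101)
    utils = [developer_utility(x, auditor_best_effort(x, knowledge_cost))
             for x in xs]
    x_star = xs[int(np.argmax(utils))]
    return x_star, auditor_best_effort(x_star, knowledge_cost)

# Cheaper access to audit-relevant knowledge -> stronger accountability threat
# -> higher compliance effort chosen by the developer.
x_cheap, e_cheap = stackelberg_equilibrium(knowledge_cost=0.2)
x_costly, e_costly = stackelberg_equilibrium(knowledge_cost=2.0)
```

In this sketch the developer's equilibrium compliance `x_cheap` exceeds `x_costly`, mirroring the abstract's conclusion that herd audit is viable precisely when auditors face lower knowledge-acquisition costs.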