An increasing number of regulations propose AI audits as a mechanism for achieving transparency and accountability for artificial intelligence (AI) systems. Despite some converging norms around various forms of AI auditing, auditing for the purpose of compliance and assurance currently lacks agreed-upon practices, procedures, taxonomies, and standards. We propose the criterion audit as an operationalizable external audit framework for compliance and assurance. We model elements of this approach on financial auditing practices and argue that AI audits should similarly provide assurance to their stakeholders about an AI organization's ability to govern its algorithms in ways that mitigate harms and uphold human values. We discuss the necessary conditions for the criterion audit and provide a procedural blueprint for performing an audit engagement in practice. We illustrate how this framework can be adapted to current regulations by deriving the criteria on which bias audits can be performed for in-scope hiring algorithms, as required by New York City Local Law 144 of 2021, which recently took effect. We conclude with a critical discussion of the benefits, inherent limitations, and implementation challenges of applying practices from the more mature financial auditing industry to AI auditing, where robust guardrails against quality assurance issues are only beginning to emerge. Our discussion, informed by experience performing these audits in practice, highlights the critical role that an audit ecosystem plays in ensuring the effectiveness of audits.