The increasing integration of Artificial Intelligence across multiple industry sectors necessitates robust mechanisms for ensuring the transparency, trust, and auditability of its development and deployment. This topic is particularly important in light of recent calls in various jurisdictions to introduce regulation and legislation on AI safety. In this paper, we propose a framework for complete verifiable AI pipelines, identifying key components and analyzing existing cryptographic approaches that contribute to verifiability across different stages of the AI lifecycle, from data sourcing to training, inference, and unlearning. This framework could be used to combat misinformation by providing cryptographic proofs alongside AI-generated assets, allowing downstream verification of their provenance and correctness. Our findings underscore the importance of ongoing research to develop cryptographic tools that are not only efficient for isolated AI processes, but also efficiently `linkable' across different processes within the AI pipeline, to support the development of end-to-end verifiable AI technologies.