The growing deployment of Artificial Intelligence (AI)-enabled systems in critical domains has made trustworthiness a paramount concern, with regulation (e.g., the EU AI Act) increasingly demanding verifiable accountability. Classical software verification and validation techniques, such as procedural audits, formal methods, and model documentation, are the mechanisms traditionally used to achieve it. However, these methods are either expensive or heavily manual, and they are ill-suited to the opaque, "black box" nature of most AI models. An intractable conflict emerges: high auditability and verifiability are required by law, yet such transparency clashes with the need to protect the very assets being audited (e.g., confidential data and proprietary models), ultimately weakening accountability. To address this challenge, this paper introduces ZKMLOps, a novel MLOps verification framework that operationalizes Zero-Knowledge Proofs (ZKPs), cryptographic protocols that allow a prover to convince a verifier that a statement is true without revealing any additional information, within Machine Learning Operations (MLOps) lifecycles. By integrating ZKPs with established software engineering patterns, ZKMLOps provides a modular and repeatable process for generating verifiable cryptographic proofs of compliance. We evaluate the framework's practicality through a case study of regulatory compliance in financial risk auditing, and we assess its feasibility through an empirical evaluation of state-of-the-art ZKP protocols, analyzing performance trade-offs for ML models of increasing complexity.
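To make the prover/verifier interaction concrete, the sketch below walks through a classic interactive zero-knowledge proof, the Schnorr identification protocol, in which a prover demonstrates knowledge of a discrete logarithm without revealing it. This is an illustrative toy only: the group parameters are insecure by design, and the ZKP protocols evaluated in the paper (succinct, non-interactive systems suited to ML inference) are far more involved.

```python
import random

# Toy Schnorr identification protocol: an interactive zero-knowledge proof
# of knowledge of a discrete logarithm x such that g^x = y (mod p).
# Parameters are tiny and insecure; illustrative values, not real crypto.
p = 23          # small prime modulus
g = 5           # generator of the multiplicative group mod p (order 22)
q = 22          # order of g, used for exponent arithmetic

x = 7                   # prover's secret witness
y = pow(g, x, p)        # public statement: "I know x with g^x = y (mod p)"

def prove_and_verify() -> bool:
    # 1. Commitment: prover picks a random nonce r and sends t = g^r.
    r = random.randrange(q)
    t = pow(g, r, p)
    # 2. Challenge: verifier replies with a random challenge c.
    c = random.randrange(q)
    # 3. Response: prover sends s = r + c*x (mod q); x itself never leaves.
    s = (r + c * x) % q
    # 4. Verification: g^s == t * y^c (mod p) holds iff the prover knows x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

assert all(prove_and_verify() for _ in range(20))
```

The check works because g^s = g^(r + c*x) = g^r * (g^x)^c = t * y^c (mod p), while the transcript (t, c, s) reveals nothing about x beyond the truth of the statement, which is the property ZKMLOps relies on when proofs must be audited without exposing confidential data or model parameters.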