The rapid adoption of complex AI systems has outpaced the development of tools to ensure their transparency, security, and regulatory compliance. In this paper, we introduce the AI Bill of Materials (AIBOM), an extension of the Software Bill of Materials (SBOM), as a standardized, verifiable record of trained AI models and their environments. Our proof-of-concept platform, AIBoMGen, automates the generation of signed AIBOMs by capturing datasets, model metadata, and environment details during training. The training platform acts as a neutral third-party observer and root of trust, enforcing verifiable AIBOM creation for every job. The system uses cryptographic hashing, digital signatures, and in-toto attestations to ensure integrity and to protect against threats such as artifact tampering by dishonest model creators. Our evaluation demonstrates that AIBoMGen reliably detects unauthorized modifications to all artifacts and generates AIBOMs with negligible performance overhead. These results position AIBoMGen as a foundational step toward secure and transparent AI ecosystems, enabling compliance with regulatory frameworks such as the EU's AI Act.
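The integrity mechanism described above can be illustrated with a minimal sketch: each training artifact is hashed, the digests are recorded in the AIBOM, and verification recomputes and compares them. This is an illustrative simplification (a real deployment would additionally sign the record, e.g. with Ed25519, and emit in-toto attestations); the function and field names below are hypothetical, not AIBoMGen's actual API.

```python
import hashlib

def hash_artifact(data: bytes) -> str:
    # SHA-256 digest recorded in the AIBOM for one artifact
    return hashlib.sha256(data).hexdigest()

# During training, the platform records digests of every artifact.
dataset = b"training-data-v1"
weights = b"model-weights-bytes"
aibom = {
    "dataset_sha256": hash_artifact(dataset),
    "weights_sha256": hash_artifact(weights),
}

def verify(aibom: dict, dataset: bytes, weights: bytes) -> bool:
    # A consumer recomputes the digests and compares them to the record;
    # any mismatch indicates tampering after AIBOM creation.
    return (hash_artifact(dataset) == aibom["dataset_sha256"]
            and hash_artifact(weights) == aibom["weights_sha256"])

print(verify(aibom, dataset, weights))                    # unmodified artifacts pass
print(verify(aibom, b"tampered-training-data", weights))  # modification is detected
```

Running this prints `True` then `False`: the recorded digest no longer matches the tampered dataset, which is the core check behind AIBoMGen's tamper-detection claim.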