This work introduces Human-Certified Module Repositories (HCMRs), a new architectural model for constructing trustworthy software in the era of AI-assisted development. As large language models increasingly participate in code generation, configuration synthesis, and multi-component integration, the reliability of AI-assembled systems will depend critically on the trustworthiness of the building blocks they use. Recent software supply-chain incidents and the structure of today's modular development ecosystems highlight the risks of relying on components with unclear provenance, insufficient review, or unpredictable composition behavior. We argue that future AI-driven development workflows require repositories of reusable modules that are curated, security-reviewed, provenance-rich, and equipped with explicit interface contracts. To this end, we propose HCMRs, a framework that blends human oversight with automated analysis to certify modules and support safe, predictable assembly by both humans and AI agents. We present a reference architecture for HCMRs, outline a certification and provenance workflow, analyze threat surfaces relevant to modular ecosystems, and extract lessons from recent failures. We further discuss implications for governance, scalability, and AI accountability, positioning HCMRs as a foundational substrate for reliable and auditable AI-constructed software systems.