This paper proposes CIRCLE, a six-stage, lifecycle-based framework for bridging the reality gap between model-centric performance metrics and AI's materialized outcomes in deployment. While existing frameworks such as MLOps focus on system stability, and benchmarks measure abstract capabilities, decision-makers outside the AI stack lack systematic evidence about how AI technologies behave under real-world user variability and constraints. CIRCLE operationalizes the Validation phase of TEVV (Test, Evaluation, Verification, and Validation) by formalizing the translation of stakeholder concerns outside the stack into measurable signals. Unlike participatory design, which often remains localized, or algorithmic audits, which are often retrospective, CIRCLE provides a structured, prospective protocol for linking context-sensitive qualitative insights to scalable quantitative metrics. By integrating methods such as field testing, red teaming, and longitudinal studies into a coordinated pipeline, CIRCLE produces systematic knowledge: evidence that is comparable across sites yet sensitive to local context. This can enable governance grounded in materialized downstream effects rather than theoretical capabilities.