The deployment of AI systems faces three critical governance challenges that current frameworks fail to adequately address. First, organizations struggle with inadequate risk assessment at the use-case level, exemplified by the Humana class action lawsuit and other high-impact cases in which an AI system deployed to production exhibited both significant bias and high error rates, resulting in improper healthcare claim denials. Each AI use case presents a unique risk profile requiring tailored governance, yet most frameworks provide one-size-fits-all guidance. Second, existing frameworks such as ISO 42001 and the NIST AI RMF remain at a high conceptual level, offering principles without actionable controls and leaving practitioners unable to translate governance requirements into specific technical implementations. Third, organizations lack mechanisms for operationalizing governance at scale, with no systematic approach to embed trustworthy AI practices throughout the development lifecycle, measure compliance quantitatively, or provide role-appropriate visibility from boards to data scientists. We present AI TIPS 2.0 (Artificial Intelligence Trust-Integrated Pillars for Sustainability), an update to the comprehensive operational framework developed in 2019, four years before NIST's AI Risk Management Framework, that directly addresses these challenges.