This paper examines the challenges of assessing Responsible AI (RAI) governance in globally decentralized organizations through a case-study collaboration between a leading research university and a multinational enterprise. While many RAI frameworks have been proposed, their application in complex organizational settings with distributed decision-making authority remains underexplored. Our RAI assessment, conducted across multiple business units and AI use cases, reveals four key patterns that shape RAI implementation: (1) complex interplay between group-level guidance and local interpretation, (2) challenges in translating abstract principles into operational practices, (3) regional and functional variation in implementation approaches, and (4) inconsistent accountability in risk oversight. Based on these findings, we propose an Adaptive RAI Governance (ARGO) Framework that balances central coordination with local autonomy through three interdependent layers: shared foundation standards, central advisory resources, and contextual local implementation. We contribute insights from an academic-industry collaboration on RAI assessment, highlighting the importance of modular governance approaches that accommodate organizational complexity while maintaining alignment with responsible AI principles. These lessons offer practical guidance for organizations navigating the transition from RAI principles to operational practice within decentralized structures.