Decentralized AI systems, such as federated learning, can play a critical role in further unlocking AI asset marketplaces (e.g., healthcare data marketplaces) by strengthening asset privacy protection. Realizing this potential requires governance mechanisms that are transparent, scalable, and verifiable. However, current governance approaches rely on bespoke, infrastructure-specific policies that hinder asset interoperability and trust between systems. We propose a Technical Policy Blueprint that encodes governance requirements as policy-as-code objects and separates asset policy verification from asset policy enforcement. In this architecture, a Policy Engine verifies evidence (e.g., identities, signatures, payments, trusted-hardware attestations) and issues capability packages. Asset Guardians (e.g., data guardians, model guardians, computation guardians) enforce access or execution solely on the basis of these capability packages. Decoupling policy processing from capability enforcement allows governance to evolve without reconfiguring the underlying AI infrastructure, yielding an approach that is transparent, auditable, and resilient to change.
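The verify/enforce split above can be sketched as follows. This is a minimal illustration, not the blueprint's implementation: the class names, the toy policy (identity plus payment), and the shared HMAC key standing in for a real signing infrastructure are all assumptions made for the example.

```python
import hmac, hashlib, json, time

# Illustrative shared key; a real deployment would use asymmetric signatures.
SECRET = b"shared-engine-guardian-key"

class PolicyEngine:
    """Verifies evidence against policy-as-code and mints capability packages."""
    def issue_capability(self, evidence):
        # Toy policy: require a verified identity and a payment receipt.
        if not (evidence.get("identity_verified") and evidence.get("payment_ok")):
            return None
        payload = {"subject": evidence["subject"],
                   "asset": evidence["asset"],
                   "action": "read",
                   "expires": time.time() + 3600}
        body = json.dumps(payload, sort_keys=True).encode()
        sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
        return {"payload": payload, "sig": sig}

class DataGuardian:
    """Enforces access using only the capability package, never the policy."""
    def authorize(self, capability, asset):
        if capability is None:
            return False
        body = json.dumps(capability["payload"], sort_keys=True).encode()
        expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
        return (hmac.compare_digest(expected, capability["sig"])
                and capability["payload"]["asset"] == asset
                and capability["payload"]["expires"] > time.time())

engine = PolicyEngine()
cap = engine.issue_capability({"subject": "hospital-A", "asset": "mri-set-7",
                               "identity_verified": True, "payment_ok": True})
guardian = DataGuardian()
print(guardian.authorize(cap, "mri-set-7"))    # True
print(guardian.authorize(cap, "other-asset"))  # False
```

Note that the guardian never consults the policy itself; swapping in a stricter `PolicyEngine` (say, one that also checks a trusted-hardware attestation) changes nothing on the guardian side, which is the point of the decoupling.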