This paper addresses privacy protection in decentralized Artificial Intelligence (AI) using Confidential Computing (CC) within the Atoma Network, a decentralized AI platform designed for the Web3 domain. Decentralized AI distributes AI services among multiple entities without centralized oversight, fostering transparency and robustness. However, this structure introduces significant privacy challenges, as sensitive assets such as proprietary models and personal data may be exposed to untrusted participants. Cryptography-based privacy protection techniques such as zero-knowledge machine learning (zkML) suffer from prohibitive computational overhead. To address this limitation, we propose leveraging CC, which uses hardware-based Trusted Execution Environments (TEEs) to provide isolation for processing sensitive data, ensuring that both model parameters and user data remain secure even in decentralized, potentially untrusted environments. While TEEs face a few limitations, we believe they can bridge the privacy gap in decentralized AI. We explore how TEEs can be integrated into Atoma's decentralized framework.