In this paper, we propose a new secure machine learning inference platform assisted by a small dedicated security processor, which is easier to protect and deploy than today's TEEs integrated into high-performance processors. Our platform provides three main advantages over the state of the art: (i) We achieve significant performance improvements over state-of-the-art distributed Privacy-Preserving Machine Learning (PPML) protocols, using only a small security processor comparable to a discrete security chip such as the Trusted Platform Module (TPM) or to the on-chip security subsystems in SoCs similar to the Apple enclave processor. In the semi-honest setting with WAN/GPU, our scheme is 4X-63X faster than Falcon (PoPETs'21) and AriaNN (PoPETs'22) and 3.8X-12X more communication-efficient. We achieve even greater performance improvements in the malicious setting. (ii) Our platform guarantees security with abort against malicious adversaries under an honest-majority assumption. (iii) Our technique is not limited by the size of secure memory in a TEE and can support high-capacity modern neural networks such as ResNet18 and Transformer. While previous work investigated the use of high-performance TEEs in PPML, this work is the first to show that even tiny secure hardware with very limited performance can significantly speed up distributed PPML protocols if the protocol is carefully designed for lightweight trusted hardware.