Transformer models have attracted significant attention due to their power across machine learning tasks, and their widespread deployment has raised concerns about the leakage of sensitive information during inference. However, when applied to Transformers, existing approaches based on secure two-party computation (2PC) suffer from two efficiency bottlenecks: (1) resource-intensive matrix multiplications in the linear layers, and (2) complex non-linear activation functions such as $\mathsf{GELU}$ and $\mathsf{Softmax}$. This work presents $\mathsf{Nimbus}$, a new two-party inference framework for Transformer models. For the linear layers, we propose a new 2PC paradigm, together with an encoding approach based on an outer-product insight, to securely compute matrix multiplications, achieving $2.9\times \sim 12.5\times$ performance improvements over the state-of-the-art (SOTA) protocol. For the non-linear layers, based on a new observation that exploits the input distribution, we propose a low-degree polynomial approximation for $\mathsf{GELU}$ and $\mathsf{Softmax}$, which improves the performance of the SOTA polynomial approximation by $2.9\times \sim 4.0\times$ with an average accuracy loss of only 0.08\% relative to plaintext (non-2PC) inference. Compared with the SOTA two-party inference, $\mathsf{Nimbus}$ improves the end-to-end performance of \bert{} inference by $2.7\times \sim 4.7\times$ across different network settings.
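To illustrate the idea behind low-degree polynomial approximation of activations, the sketch below fits a polynomial to the exact $\mathsf{GELU}$ over a bounded input interval, mimicking the observation that activation inputs concentrate in a narrow range. The degree (6) and interval ($[-4, 4]$) are illustrative assumptions, not the actual parameters or protocol of $\mathsf{Nimbus}$.

```python
import numpy as np
from math import erf, sqrt

def gelu(x):
    """Exact GELU: x * Phi(x), with Phi the standard normal CDF."""
    return np.array([0.5 * v * (1.0 + erf(v / sqrt(2.0))) for v in x])

# Hypothetical setup: sample the (assumed) concentrated input range and
# fit a degree-6 polynomial by least squares. In a 2PC setting, evaluating
# this polynomial needs only secure additions and multiplications, which
# are far cheaper than a bit-level computation of the exact GELU.
xs = np.linspace(-4.0, 4.0, 2001)
coeffs = np.polyfit(xs, gelu(xs), deg=6)

# Maximum absolute deviation of the polynomial surrogate on the interval.
max_err = np.max(np.abs(np.polyval(coeffs, xs) - gelu(xs)))
```

A smaller interval or a higher degree trades approximation error against the number of secure multiplications per activation; the paper's contribution is choosing this trade-off guided by the empirical input distribution.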