Privacy-preserving Transformer inference has attracted attention due to the potential leakage of private information. Despite recent progress, existing frameworks still fall short of practical model scales, with gaps of up to a hundredfold. One possible way to close this gap is the Mixture-of-Experts (MoE) architecture, which has emerged as a promising technique for scaling up model capacity with minimal overhead. However, since current secure two-party computation (2-PC) protocols let the server homomorphically evaluate the FFN layer with its plaintext model weights, in the MoE setting this could reveal to the server which expert is activated, exposing token-level privacy about the client's input. While naively evaluating all experts before selection would protect privacy, it nullifies MoE sparsity and incurs the heavy computational overhead that sparse MoE is designed to avoid. To address these privacy and efficiency limitations, we propose \SecMoE, a 2-PC privacy-preserving inference framework. By unifying the per-entry circuits of both the MoE layer and piecewise polynomial functions, \SecMoE obliviously selects the extracted parameters from the circuits and computes only one encrypted entry, a paradigm we refer to as Select-Then-Compute. This allows the model for private inference to scale 63$\times$ larger with only a 15.2$\times$ increase in end-to-end runtime. Extensive experiments show that, under five-expert settings, \SecMoE reduces end-to-end private inference communication by 1.8$\sim$7.1$\times$ and achieves a 1.3$\sim$3.8$\times$ speedup over state-of-the-art (SOTA) protocols.