The increased deployment of machine learning inference in various applications has sparked privacy concerns. In response, private inference (PI) protocols have been developed to allow parties to perform inference without revealing their sensitive data. Despite recent advances in PI efficiency, most current methods assume a semi-honest threat model in which the data owner is honest and adheres to the protocol. In reality, however, data owners can have diverse motivations and act in unpredictable ways, making this assumption unrealistic. To demonstrate how a malicious client can compromise the semi-honest model, we first design an inference manipulation attack against a range of state-of-the-art private inference protocols. This attack allows a malicious client to modify the model output with 3x to 8x fewer queries than current black-box attacks. Motivated by these attacks, we propose and implement RobPI, a robust private inference protocol that withstands malicious clients. RobPI integrates a distinctive cryptographic protocol that bolsters security by weaving encryption-compatible noise into the logits and features of private inference, thereby efficiently warding off malicious-client attacks. Our extensive experiments on various neural networks and datasets show that RobPI reduces the attack success rate by ~91.9% and increases the number of queries required by malicious-client attacks by more than 10x.
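The abstract does not specify RobPI's construction in detail; as a rough intuition for the general defense idea of perturbing released logits to raise the query cost of manipulation attacks, here is a minimal plaintext sketch. The function name, the Gaussian noise choice, and the `sigma` parameter are all illustrative assumptions — RobPI's actual mechanism weaves encryption-compatible noise inside the cryptographic protocol, which this sketch does not reproduce.

```python
import numpy as np

def noisy_logits(logits, sigma=0.5, seed=None):
    """Perturb logits with Gaussian noise before release.

    Plaintext analogue of logit perturbation: each query sees a
    randomized output, so a client probing the decision boundary
    needs many more queries to extract a stable signal. Hypothetical
    sketch, not RobPI's encryption-compatible construction.
    """
    rng = np.random.default_rng(seed)
    return logits + rng.normal(0.0, sigma, size=logits.shape)

# Example: the noisy logits differ per query, while a well-separated
# top-1 prediction usually survives moderate noise.
logits = np.array([2.0, 0.1, -1.3])
perturbed = noisy_logits(logits, sigma=0.5, seed=0)
```

A server would tune the noise scale to trade off defense strength against accuracy loss on benign queries, since larger `sigma` degrades the released scores more.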