In an MPC-protected distributed computation, although the use of MPC assures data privacy during computation, sensitive information may still be inferred by curious MPC participants from the computation output. This can be observed, for instance, in inference attacks on federated learning or on more standard statistical computations with distributed inputs. In this work, we address this output privacy issue by proposing a discrete and bounded Laplace-inspired perturbation mechanism, along with a secure realization of this mechanism using MPC. The proposed mechanism has strictly zero failure probability, overcoming a limitation of other existing bounded and discrete variants of Laplace perturbation. We analyze the privacy and utility of the proposed differential privacy (DP) perturbation. Additionally, we design MPC protocols to implement this mechanism and present performance benchmarks based on our experimental setup. The MPC realization of the proposed mechanism exhibits a complexity similar to that of the state-of-the-art discrete Gaussian mechanism, making it an alternative with comparable efficiency and a stronger differential privacy guarantee. Moreover, the efficiency of the proposed scheme can be further enhanced by performing noise generation offline while keeping only the perturbation phase online.
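To make the zero-failure property concrete, the following is a minimal illustrative sketch (not the paper's actual mechanism or its MPC realization): a discrete Laplace distribution truncated to a bounded support and renormalized, sampled by inverse CDF. Because the truncated distribution is sampled directly rather than via rejection, every draw succeeds. The function names, parameters, and sampling strategy here are assumptions for illustration only.

```python
import math
import random

def bounded_discrete_laplace(scale: float, bound: int) -> int:
    # Illustrative sketch: sample from a discrete Laplace distribution
    # truncated to {-bound, ..., bound} and renormalized. Direct
    # inverse-CDF sampling means no rejection step, hence zero failure
    # probability (unlike rejection-based truncated samplers).
    support = range(-bound, bound + 1)
    weights = [math.exp(-abs(k) / scale) for k in support]
    u = random.random() * sum(weights)
    acc = 0.0
    for k, w in zip(support, weights):
        acc += w
        if u <= acc:
            return k
    return bound  # guard against floating-point rounding at the tail

def perturb(value: int, scale: float, bound: int) -> int:
    # Add bounded discrete noise to an integer query output, so the
    # released result deviates from the true value by at most `bound`.
    return value + bounded_discrete_laplace(scale, bound)
```

In an MPC setting, the paper's point is that such noise can be generated offline among the parties, leaving only the cheap addition in `perturb` for the online phase.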