Vertical Federated Learning (VFL) aims to enable collaborative training of deep learning models while preserving privacy. However, the VFL procedure still contains components that are vulnerable to attacks by malicious parties. In this work, we consider feature reconstruction attacks, a common threat aimed at compromising input data. We argue theoretically that feature reconstruction attacks cannot succeed without knowledge of the prior distribution of the data. Consequently, we demonstrate that even simple transformations of the model architecture can significantly strengthen the protection of input data during VFL. Confirming these findings experimentally, we show that MLP-based models are resistant to state-of-the-art feature reconstruction attacks.