Federated learning has emerged as a promising privacy-preserving solution for machine learning domains that rely on user interactions, particularly recommender systems and online learning to rank. While there has been substantial research on the privacy of traditional federated learning, little attention has been paid to the privacy properties of these interaction-based settings. In this work, we show that users face an elevated risk of having their private interactions reconstructed by the central server when the server can control the training features of the items that users interact with. We introduce RAIFLE, a novel optimization-based attack framework where the server actively manipulates the features of the items presented to users to increase the success rate of reconstruction. Our experiments with federated recommendation and online learning-to-rank scenarios demonstrate that RAIFLE is significantly more powerful than existing reconstruction attacks like gradient inversion, achieving high performance consistently in most settings. We discuss the pros and cons of several possible countermeasures to defend against RAIFLE in the context of interaction-based federated learning. Our code is open-sourced at https://github.com/dzungvpham/raifle.
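To build intuition for the attack described above, here is a toy sketch (not the paper's actual RAIFLE algorithm) of how a server can reconstruct private interactions from a shared gradient when it controls the item features. It assumes a hypothetical linear least-squares model; the key point is that choosing a well-conditioned (here orthogonal) feature matrix makes the gradient exactly invertible.

```python
import numpy as np

# Hypothetical setup (not the paper's algorithm): a linear model trained
# with least squares, where the server chooses the item feature matrix X.
rng = np.random.default_rng(0)
n_items = dim = 8

# Server-controlled item features. Picking an orthogonal X makes X^T
# invertible, so the reconstruction below is exact -- illustrating why
# control over features amplifies the attack.
X, _ = np.linalg.qr(rng.normal(size=(n_items, dim)))

w = rng.normal(size=dim)                        # current global model weights
y = rng.integers(0, 2, n_items).astype(float)   # private 0/1 interactions

# Client-side update: gradient of 0.5 * ||X w - y||^2 with respect to w.
grad = X.T @ (X @ w - y)

# Server-side inversion: grad = X^T X w - X^T y  =>  X^T y = X^T X w - grad.
y_hat = np.linalg.solve(X.T, X.T @ X @ w - grad)
recovered = np.round(y_hat)
```

With benign (poorly conditioned or high-dimensional) features this inversion would be noisy; the abstract's central observation is that an *active* server can manipulate the features to avoid exactly that noise.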