Collaborative-learning-based recommender systems emerged following the success of collaborative learning techniques such as Federated Learning (FL) and Gossip Learning (GL). In these systems, users participate in training a recommender system while keeping their history of consumed items on their own devices. While these solutions appeared appealing at first glance for preserving participants' privacy, recent studies have revealed that collaborative learning can be vulnerable to various privacy attacks. In this paper, we study the resilience of collaborative-learning-based recommender systems against a novel privacy attack called the Community Detection Attack (CDA). This attack enables an adversary to identify community members based on a chosen set of items (e.g., identifying users interested in specific points of interest). Through experiments on three real recommendation datasets using two state-of-the-art recommendation models, we evaluate the sensitivity to CDA of an FL-based recommender system as well as two flavors of Gossip-Learning-based recommender systems. The results show that, across all models and datasets, the FL setting is more vulnerable to CDA than the Gossip settings. Furthermore, we assess two off-the-shelf mitigation strategies, namely differential privacy (DP) and a \emph{Share less} policy, which consists of sharing only a subset of less sensitive model parameters. The findings indicate a more favorable privacy-utility trade-off for the \emph{Share less} strategy, particularly in FedRecs.