Federated Collaborative Filtering (FedCF) is an emerging field focused on developing recommendation frameworks that preserve privacy in a federated setting. Existing FedCF methods typically combine distributed Collaborative Filtering (CF) algorithms with privacy-preserving mechanisms, encoding each user's personalized information into a user embedding vector. However, a single user embedding is usually insufficient to capture the rich, fine-grained personalization across heterogeneous clients. This paper proposes a novel personalized FedCF method that preserves users' personalized information in a latent variable and a neural model simultaneously. Specifically, we decompose the modeling of user knowledge into two encoders, designed to capture shared knowledge and personalized knowledge separately. A personalized gating network then balances personalization and generalization between the global and local encoders. Moreover, to train the proposed framework effectively, we cast the CF problem as a specialized Variational AutoEncoder (VAE) task that integrates user interaction-vector reconstruction with missing-value prediction: the decoder is trained to reconstruct the implicit feedback from items the user has interacted with, while also predicting items the user might be interested in but has not yet interacted with. Experimental results on benchmark datasets demonstrate that the proposed method outperforms baseline methods. Our code is available at https://github.com/mtics/FedDAE.
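The dual-encoder design described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the linear encoders, the per-dimension sigmoid gate, and all weight names (`W_global`, `W_local`, `W_gate`, `W_dec`) are illustrative assumptions. It shows only the core idea of blending a shared (global) encoder with a personalized (local) encoder via a gating network, then decoding a VAE latent into item scores that both reconstruct observed interactions and rank unobserved ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Linear encoder producing (mu, logvar) halves of its output."""
    h = x @ W
    d = h.shape[-1] // 2
    return h[..., :d], h[..., d:]

n_items, d = 20, 4
# Binary implicit-feedback vector for one client (1 = interacted).
x = (rng.random(n_items) < 0.3).astype(float)

# Illustrative weights: shared encoder (aggregated on the server),
# personalized encoder and gate (kept on the client), and decoder.
W_global = rng.normal(0.0, 0.1, (n_items, 2 * d))
W_local = rng.normal(0.0, 0.1, (n_items, 2 * d))
W_gate = rng.normal(0.0, 0.1, (n_items, d))
W_dec = rng.normal(0.0, 0.1, (d, n_items))

mu_g, logvar_g = encode(x, W_global)   # shared knowledge
mu_l, logvar_l = encode(x, W_local)    # personalized knowledge

# Gating network: per-dimension weights in (0, 1) balancing the two encoders.
g = 1.0 / (1.0 + np.exp(-(x @ W_gate)))
mu = g * mu_l + (1.0 - g) * mu_g
logvar = g * logvar_l + (1.0 - g) * logvar_g

# Reparameterization trick, then decode latent z into item scores.
z = mu + np.exp(0.5 * logvar) * rng.standard_normal(d)
scores = 1.0 / (1.0 + np.exp(-(z @ W_dec)))

# Loss couples reconstruction of observed interactions with prediction of
# missing (unobserved) items, plus the standard VAE KL regularizer.
bce = -(x * np.log(scores + 1e-9)
        + (1.0 - x) * np.log(1.0 - scores + 1e-9)).mean()
kl = 0.5 * (np.exp(logvar) + mu**2 - 1.0 - logvar).sum()
loss = bce + kl
```

In an actual federated round, only the shared encoder's updates would be sent to the server for aggregation, while the personalized encoder and gating network stay on the client.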