Federated recommendations (FRs), which enable multiple local clients to collaboratively learn a global model without disclosing private user data, have emerged as a prevalent on-device service. In conventional FRs, the dominant paradigm represents clients and items with discrete identities, which are then mapped to domain-specific embeddings for model training. Despite their considerable performance, we reveal three inherent limitations that cannot be ignored in federated settings: non-transferability across domains, ineffectiveness in cold-start settings, and potential privacy violations during federated training. To this end, we propose a transferable federated recommendation model, TransFR, which delicately combines the general capabilities empowered by pre-trained models with the personalization abilities obtained by fine-tuning on local private data. Specifically, it first learns domain-agnostic item representations by exploiting pre-trained models with public textual corpora. To tailor them to FR tasks, we further introduce efficient federated adapter-tuning and test-time adaptation mechanisms, which yield a personalized local adapter for each client by fitting its private data distribution. We theoretically prove the advantages of incorporating adapter tuning in FRs with respect to both effectiveness and privacy. Through extensive experiments, we show that TransFR surpasses several state-of-the-art FRs in transferability.
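To make the adapter-tuning idea concrete, the following is a minimal, illustrative sketch (not the paper's implementation; all shapes, names, and the bottleneck design are assumptions): a frozen item representation from a pre-trained text encoder is passed through a small trainable bottleneck adapter with a residual connection, and only the adapter parameters would be tuned locally on each client's private data.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # hidden size and bottleneck rank (illustrative values)

def relu(x):
    return np.maximum(x, 0.0)

class LocalAdapter:
    """Per-client bottleneck adapter (hypothetical): down-project, apply a
    nonlinearity, up-project, and add a residual connection so the frozen
    pre-trained representation is preserved when the adapter is near zero."""
    def __init__(self, d, r, rng):
        self.W_down = rng.normal(scale=0.1, size=(d, r))  # trainable, stays local
        self.W_up = rng.normal(scale=0.1, size=(r, d))    # trainable, stays local

    def __call__(self, h):
        # Residual adaptation of the frozen representation h.
        return h + relu(h @ self.W_down) @ self.W_up

# Stand-in for a domain-agnostic item representation from a frozen encoder.
frozen_item_repr = rng.normal(size=(d,))
adapter = LocalAdapter(d, r, rng)
adapted = adapter(frozen_item_repr)
assert adapted.shape == frozen_item_repr.shape
```

Because only the small adapter matrices are client-specific, the frozen backbone can be shared across domains while personalization happens locally, which is the property the abstract attributes to federated adapter tuning.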