User-centric recommendation has become essential for delivering personalized services, as it enables systems to adapt to users' evolving behaviors while respecting their long-term preferences and privacy constraints. Although federated learning offers a promising alternative to centralized training, existing approaches largely overlook the dynamics of user behavior, leading to temporal forgetting and weakened collaborative personalization. In this work, we propose FCUCR, a federated continual recommendation framework designed to support long-term personalization in a privacy-preserving manner. To address temporal forgetting, we introduce a time-aware self-distillation strategy that implicitly retains historical preferences during local model updates. To tackle collaborative personalization under heterogeneous user data, we design an inter-user prototype transfer mechanism that enriches each client's representation with knowledge from similar users while preserving individual decision logic. Extensive experiments on four public benchmarks demonstrate the effectiveness of our approach, along with its strong compatibility and practical applicability. Code is available.
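The abstract names two mechanisms without detailing them. As a minimal sketch only, the snippet below illustrates what a time-aware self-distillation loss and an inter-user prototype transfer step *could* look like; the function names, the exponential recency weighting, the KL-based distillation term, and the cosine-similarity mixing are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def time_decay_weights(ages, gamma=0.9):
    # Hypothetical recency weighting: older interactions (larger age)
    # contribute less; weights are normalized to sum to 1.
    w = gamma ** np.asarray(ages, dtype=float)
    return w / w.sum()

def time_aware_distill_loss(student_logits, teacher_logits, ages, gamma=0.9):
    # Assumed distillation term: per-interaction KL(teacher || student),
    # weighted by recency, where the "teacher" is the client's previous
    # local model (so historical preferences are retained implicitly).
    w = time_decay_weights(ages, gamma)
    p = softmax(teacher_logits)
    q = softmax(student_logits)
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return float((w * kl).sum())

def transfer_prototypes(local_proto, peer_protos, top_k=2, alpha=0.5):
    # Hypothetical prototype transfer: blend the client's prototype with a
    # similarity-weighted mix of its top-k most similar peers' prototypes
    # (e.g., shared via the server), keeping the local part dominant so the
    # client's own decision logic is preserved.
    sims = peer_protos @ local_proto / (
        np.linalg.norm(peer_protos, axis=1) * np.linalg.norm(local_proto) + 1e-12
    )
    idx = np.argsort(sims)[-top_k:]
    w = softmax(sims[idx])
    return alpha * local_proto + (1.0 - alpha) * (w @ peer_protos[idx])
```

Note that when student and teacher agree exactly, the distillation term vanishes, so the loss only penalizes drift away from previously learned preferences, scaled by how recent each interaction is.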