The proliferation of connected devices and privacy-sensitive applications has accelerated the adoption of Federated Learning (FL), a decentralized paradigm that enables collaborative model training without sharing raw data. While FL addresses data locality and privacy concerns, it does not inherently support the data deletion requests increasingly mandated by regulations such as the Right to be Forgotten (RTBF). In centralized learning, this challenge has been studied under the concept of Machine Unlearning (MU), which focuses on efficiently removing the influence of specific data samples or clients from trained models. Extending this notion to federated settings has given rise to Federated Unlearning (FUL), a new research area concerned with eliminating the contributions of individual clients or data subsets from the global FL model in a distributed and heterogeneous environment. In this survey, we first introduce the fundamentals of FUL. We then review the FUL frameworks proposed to address three main implementation challenges: communication cost, resource allocation, and security and privacy. Furthermore, we discuss applications of FUL in modern distributed computer networks, and we highlight open challenges and future research opportunities. By consolidating existing knowledge and mapping open problems, this survey aims to serve as a foundational reference for researchers and practitioners seeking to advance FL toward trustworthy, regulation-compliant, and user-centric federated systems.