Reinforcement learning with human feedback (RLHF) fine-tunes a pretrained large language model (LLM) on preference datasets, enabling the LLM to generate outputs that align with human preferences. Because these preference datasets are sensitive and held by multiple clients who are reluctant to share them due to privacy concerns, RLHF must be implemented within a federated learning (FL) framework. To this end, we introduce a feasible framework, FedBis, in which clients collaboratively train a binary selector on their preference datasets. With a well-trained selector, we can further guide the LLM to generate human-preferred completions. Meanwhile, we propose a novel algorithm, FedBiscuit, that trains multiple selectors by organizing clients into balanced and disjoint clusters based on their preferences. Compared with FedBis, FedBiscuit demonstrates superior performance in simulating human preferences over pairwise completions. Our extensive experiments on federated human preference datasets, the first benchmark to address heterogeneous data partitioning among clients, demonstrate that FedBiscuit outperforms FedBis and even surpasses traditional centralized training.
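The abstract names two procedures: FedBis, where clients federatedly train a binary selector on private preference pairs, and FedBiscuit, which organizes clients into balanced, disjoint clusters and trains one selector per cluster. As a rough illustration only, the following Python sketch shows what one federated round of selector training might look like. The names (BinarySelector, local_update, fedavg, assign_balanced_clusters), the Bradley-Terry-style pairwise loss, and the round-robin clustering heuristic are all assumptions made for exposition, not the paper's actual implementation.

```python
# Minimal sketch of a FedBis-style round, under the assumptions stated above.
import copy
import torch
import torch.nn as nn

class BinarySelector(nn.Module):
    """Scores a (prompt, completion) pair so the preferred completion
    of a pair receives the higher score. Hypothetical architecture."""
    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):  # x: (batch, dim) pooled features of a pair element
        return self.scorer(x).squeeze(-1)

def local_update(model, batches, lr=1e-3, epochs=1):
    """One client's local training on its private preference pairs.
    Each batch holds features of the chosen and rejected completions."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for chosen, rejected in batches:
            # Bradley-Terry-style pairwise loss: push the chosen completion's
            # score above the rejected one's (an assumed, standard choice).
            loss = -torch.nn.functional.logsigmoid(
                model(chosen) - model(rejected)).mean()
            opt.zero_grad(); loss.backward(); opt.step()
    return model.state_dict()

def fedavg(global_model, client_states):
    """Server step: average client selector weights (plain FedAvg)."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in client_states]).mean(0)
    global_model.load_state_dict(avg)
    return global_model

def assign_balanced_clusters(client_pref_summaries, num_clusters):
    """FedBiscuit-flavored grouping sketch: sort clients by a scalar
    preference summary, then deal them round-robin so clusters stay
    balanced in size and disjoint. A heuristic assumed here."""
    order = sorted(range(len(client_pref_summaries)),
                   key=lambda i: client_pref_summaries[i])
    return [order[c::num_clusters] for c in range(num_clusters)]

# Hypothetical usage: one federated round within a single cluster,
# with random features standing in for real preference data.
global_model = BinarySelector(dim=16)
clients = [[(torch.randn(8, 16), torch.randn(8, 16))] for _ in range(4)]
states = [local_update(global_model, batches) for batches in clients]
global_model = fedavg(global_model, states)
print(assign_balanced_clusters([0.2, 0.9, 0.4, 0.7], num_clusters=2))
```

In a FedBiscuit-style setup, each cluster returned by the grouping step would run rounds like the one above on its own selector, and the resulting selectors would jointly judge pairwise completions when improving the LLM.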