Reinforcement Learning from Human Feedback (RLHF) aligns large language models (LLMs) with human preferences, thereby improving the quality of generated responses. A critical component of RLHF is the reward model, which is trained on preference data and outputs a scalar reward during inference. However, the collection of preference data still lacks thorough investigation. Recent studies indicate that preference data is typically collected by either AI annotators or humans, who identify the chosen and rejected instances within pairwise responses. We question whether this process effectively filters out noise and ensures sufficient diversity in the collected data. To address these concerns, for the first time, we propose a comprehensive framework for preference data collection, decomposing the process into four incremental steps: Prompt Generation, Response Generation, Response Filtering, and Human Labeling. This structured approach ensures the collection of high-quality preferences while reducing reliance on human labor. We conduct extensive experiments on the data collected at different stages, demonstrating the effectiveness of the proposed data collection method.
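The abstract names the four stages but not how they compose. The following is a minimal, hypothetical Python sketch of how such a pipeline might be wired together; every function name, placeholder body, and the toy de-duplication heuristic in the filtering stage are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the four-stage preference-collection pipeline.
# All function bodies are placeholders; the real stages would call an
# LLM and a human annotation interface, which are stubbed out here.

from dataclasses import dataclass


@dataclass
class PreferencePair:
    prompt: str
    chosen: str
    rejected: str


def generate_prompts(seed_topics):
    # Stage 1: Prompt Generation (placeholder: simple template expansion).
    return [f"Explain {topic} to a beginner." for topic in seed_topics]


def generate_responses(prompt, n=4):
    # Stage 2: Response Generation (placeholder: an LLM would be sampled
    # n times here to obtain candidate responses).
    return [f"[candidate response {i} to: {prompt}]" for i in range(n)]


def filter_responses(responses):
    # Stage 3: Response Filtering (placeholder heuristic: drop exact
    # near-duplicates so annotators only see distinct candidates).
    seen, kept = set(), []
    for response in responses:
        key = response.lower().strip()
        if key not in seen:
            seen.add(key)
            kept.append(response)
    return kept[:2]  # keep two candidates for pairwise comparison


def human_label(prompt, candidates):
    # Stage 4: Human Labeling (placeholder: an annotator would pick the
    # chosen response; here we arbitrarily take the first candidate).
    return PreferencePair(prompt, chosen=candidates[0], rejected=candidates[1])


if __name__ == "__main__":
    for prompt in generate_prompts(["reward models"]):
        candidates = filter_responses(generate_responses(prompt))
        if len(candidates) >= 2:
            print(human_label(prompt, candidates))
```

The point of the sketch is the incremental structure: each stage narrows or refines the output of the previous one, so expensive human labeling is applied only to the small, filtered candidate set.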