This paper addresses the cost-efficiency aspect of Reinforcement Learning from Human Feedback (RLHF). RLHF leverages datasets of human preferences over outputs of large language models (LLMs) to instill human expectations into LLMs. While preference annotation incurs a monetary cost, the economic utility of a preference dataset has thus far been largely overlooked. Exacerbating this situation, the complex intransitive or cyclic relationships in preference datasets leave existing algorithms for fine-tuning LLMs far from capturing comprehensive preferences. This raises severe cost-efficiency concerns in production environments, where preference data accumulate over time. In this paper, we view the fine-tuning of LLMs as a monetized economy and introduce an auction mechanism to improve the efficiency of preference data collection in dollar terms. We show that such a mechanism can play an essential role in enhancing the cost-efficiency of RLHF while maintaining satisfactory model performance. Experimental results demonstrate that our proposed auction-based protocol makes fine-tuning LLMs cost-efficient by concentrating on high-quality feedback.