This paper addresses the cost-efficiency of Reinforcement Learning from Human Feedback (RLHF). RLHF leverages datasets of human preferences over the outputs of large language models (LLMs) to instill human expectations into LLMs. Although preference annotation carries a monetary cost, the economic utility of a preference dataset has thus far received little attention. This situation is exacerbated by the complex intransitive, or cyclic, relationships in preference datasets: existing algorithms for fine-tuning LLMs are still far from capturing comprehensive preferences. This raises serious cost-efficiency concerns in production environments, where preference data accumulate over time. In this paper, we frame the fine-tuning of LLMs as a monetized economy and introduce an auction mechanism to improve the efficiency of preference data collection in dollar terms. We show that an auction mechanism can play an essential role in enhancing the cost-efficiency of RLHF while maintaining satisfactory model performance. Experimental results demonstrate that our proposed auction-based protocol is cost-effective for fine-tuning LLMs by concentrating on high-quality feedback.