Advancements in Natural Language Processing (NLP) have led to the emergence of Large Language Models (LLMs) such as GPT, Llama, Claude, and Gemini, which excel across a range of tasks but require extensive fine-tuning to align their outputs with human expectations. A widely used method for achieving this alignment is Reinforcement Learning from Human Feedback (RLHF), which, despite its success, faces challenges in accurately modelling human preferences. In this paper, we introduce GazeReward, a novel framework that integrates implicit feedback -- specifically eye-tracking (ET) data -- into the Reward Model (RM). In addition, we explore how ET-based features can provide insights into user preferences. Through ablation studies, we test our framework with different integration methods, LLMs, and ET generator models, demonstrating that our approach significantly improves the accuracy of the RM on established human preference datasets. This work advances the ongoing discussion on optimizing AI alignment with human values and explores the potential of cognitive data for shaping future NLP research.
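To make the core idea concrete, the sketch below shows one plausible way a reward model could fuse text-derived features with eye-tracking features before scoring a response. This is a minimal illustration only: the dimensions, the concatenation-based fusion, the random weights, and all function names are assumptions for exposition, not the GazeReward implementation described in the paper.

```python
# Illustrative sketch (NOT the paper's method): a reward head that fuses
# text features with eye-tracking (ET) features via concatenation and
# scores candidate responses with a scalar reward.
import numpy as np

rng = np.random.default_rng(0)

TEXT_DIM, ET_DIM, HIDDEN = 16, 4, 8  # assumed toy dimensions

# Stand-in weights for a trained fusion head (randomly initialised here).
W_fuse = rng.normal(size=(TEXT_DIM + ET_DIM, HIDDEN))
w_out = rng.normal(size=HIDDEN)

def reward(text_feats: np.ndarray, et_feats: np.ndarray) -> float:
    """Concatenate text and ET features, pass them through a small
    nonlinear head, and return a scalar reward."""
    x = np.concatenate([text_feats, et_feats])
    h = np.tanh(x @ W_fuse)   # hidden representation
    return float(h @ w_out)   # scalar reward

def prefers(resp_a, resp_b) -> bool:
    """Pairwise preference: the response with the higher reward wins,
    mirroring how RM accuracy is evaluated on preference datasets."""
    return reward(*resp_a) > reward(*resp_b)

# Two synthetic candidate responses, each a (text, ET) feature pair.
resp_a = (rng.normal(size=TEXT_DIM), rng.normal(size=ET_DIM))
resp_b = (rng.normal(size=TEXT_DIM), rng.normal(size=ET_DIM))
print(prefers(resp_a, resp_b))
```

In this toy setup the fusion is a simple concatenation followed by one hidden layer; the paper's ablations compare different integration methods, so this should be read as one point in that design space rather than the chosen architecture.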