Advancements in Natural Language Processing (NLP) have led to the emergence of Large Language Models (LLMs) such as GPT, Llama, Claude, and Gemini, which excel across a range of tasks but require extensive fine-tuning to align their outputs with human expectations. A widely used method for achieving this alignment is Reinforcement Learning from Human Feedback (RLHF), which, despite its success, faces challenges in accurately modelling human preferences. In this paper, we introduce GazeReward, a novel framework that integrates implicit feedback -- specifically, eye-tracking (ET) data -- into the Reward Model (RM). In addition, we explore how ET-based features can provide insights into user preferences. Through ablation studies, we test our framework with different integration methods, LLMs, and ET generator models, demonstrating that our approach significantly improves the accuracy of the RM on established human preference datasets. This work advances the ongoing discussion on optimizing AI alignment with human values, exploring the potential of cognitive data for shaping future NLP research.