We present ALT (ALignment with Textual feedback), an approach that aligns language models with user preferences expressed in text. We argue that text offers greater expressiveness than simple comparative preferences, enabling users to provide richer feedback, and that this richer feedback can lead to more efficient and effective alignment. ALT aligns the model by conditioning its generation on the textual feedback. Our method relies solely on language-modeling techniques and requires minimal hyper-parameter tuning, yet it retains the main benefits of RL-based alignment algorithms and can effectively learn from textual feedback. We explore the efficacy and efficiency of textual feedback across tasks such as toxicity reduction, summarization, and dialog response generation. We find that ALT outperforms PPO on toxicity reduction and matches its performance on summarization with only 20% of the samples. We also explore using ALT with feedback provided by an existing LLM, considering both constrained and unconstrained textual feedback. Finally, we outline future directions for aligning models with natural language feedback.