Aligning large language models (LLMs) with human preferences is crucial for enhancing their utility in terms of helpfulness, truthfulness, safety, harmlessness, and interestingness. Existing methods for achieving this alignment often employ reinforcement learning from human feedback (RLHF) to fine-tune LLMs based on human labels assessing the relative quality of model responses. Nevertheless, RLHF is susceptible to instability during fine-tuning and presents challenges in implementation. Drawing inspiration from the emerging field of representation engineering (RepE), this study aims to identify representations of high-level human preferences embedded in patterns of activity within an LLM, and to achieve precise control of model behavior by transforming these representations. This novel approach, denoted Representation Alignment from Human Feedback (RAHF), proves to be effective, computationally efficient, and easy to implement. Extensive experiments demonstrate the efficacy of RAHF in not only capturing but also manipulating representations to align with a broad spectrum of human preferences and values, rather than being confined to a single concept or function (e.g., honesty or bias). RAHF's versatility in accommodating diverse human preferences shows its potential for advancing LLM performance.
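As a rough illustration of the representation-engineering idea the abstract alludes to (not the RAHF procedure itself), the sketch below shows one common way a "preference" direction might be extracted from hidden activations and used to steer a model at inference time. The toy layer, the difference-of-means direction, and the steering coefficient `alpha` are all illustrative assumptions.

```python
# Minimal conceptual sketch of activation steering, assuming a
# difference-of-means direction; NOT the paper's actual algorithm.
import torch
import torch.nn as nn

torch.manual_seed(0)

hidden_size = 16
layer = nn.Linear(hidden_size, hidden_size)  # stand-in for one transformer block

# Hypothetical hidden states collected from responses humans prefer vs. disprefer.
preferred_acts = torch.randn(32, hidden_size) + 0.5
dispreferred_acts = torch.randn(32, hidden_size) - 0.5

# 1) Identify a "preference" direction as the difference of mean activations.
direction = preferred_acts.mean(dim=0) - dispreferred_acts.mean(dim=0)
direction = direction / direction.norm()

# 2) Control behavior by shifting the layer's output along that direction.
def steering_hook(module, inputs, output, alpha=2.0):
    return output + alpha * direction

handle = layer.register_forward_hook(steering_hook)
x = torch.randn(1, hidden_size)
steered = layer(x)          # output nudged toward the "preferred" region
handle.remove()
unsteered = layer(x)

# The steered output moves along the extracted direction.
print("shift along direction:",
      torch.dot((steered - unsteered).squeeze(), direction).item())
```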