As the capabilities of large language models (LLMs) have expanded dramatically, aligning these models with human values has become a significant challenge. Traditional alignment strategies rely heavily on human intervention, such as Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF), or on the self-alignment capacities of LLMs, which typically depend on a strong LLM's emergent ability to refine its own initially flawed answers. To address these challenges, we propose a novel self-alignment method built on a Chain of Thought (CoT) approach, termed AlignCoT. This method comprises three stages: Question Analysis, Answer Guidance, and Safe Answer generation. It is designed to enable LLMs to generate high-quality, safe responses throughout various stages of their development. Furthermore, we introduce the Mixture of insighTful Experts (MoTE) architecture, which applies a mixture-of-experts approach to enhance each component of the AlignCoT process, markedly increasing alignment efficiency. MoTE not only outperforms existing methods in aligning LLMs with human values but also highlights the advantages of using self-generated data, revealing the dual benefits of improved alignment and training efficiency.
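The three AlignCoT stages can be illustrated as a simple prompting pipeline. This is a minimal hypothetical sketch, not the paper's implementation: `generate` stands in for any LLM text-completion callable, and the prompt wording is illustrative.

```python
from typing import Callable

def align_cot(question: str, generate: Callable[[str], str]) -> dict:
    """Chain the three AlignCoT stages: analysis, guidance, safe answer."""
    # Stage 1: Question Analysis -- identify intent and potential safety risks.
    analysis = generate(
        f"Analyze the following question, noting its intent and any "
        f"safety risks:\n{question}"
    )
    # Stage 2: Answer Guidance -- outline how a safe answer should be framed.
    guidance = generate(
        f"Question: {question}\nAnalysis: {analysis}\n"
        f"Give guidance on how to answer safely and helpfully."
    )
    # Stage 3: Safe Answer -- produce the final response under that guidance.
    answer = generate(
        f"Question: {question}\nGuidance: {guidance}\n"
        f"Write the final safe, high-quality answer."
    )
    return {"analysis": analysis, "guidance": guidance, "answer": answer}

# Stub model for demonstration only; a real system would call an LLM here.
def stub_model(prompt: str) -> str:
    if prompt.startswith("Analyze"):
        return "[analysis]"
    if "Give guidance" in prompt:
        return "[guidance]"
    return "[safe answer]"

result = align_cot("How do I stay safe online?", stub_model)
```

Because each stage conditions on the previous stage's output, the pipeline's intermediate products (analysis and guidance) can also serve as self-generated training data, which is the property MoTE exploits.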