With the rapid development of large language models (LLMs), aligning LLMs with human values and societal norms to ensure their reliability and safety has become crucial. Reinforcement learning with human feedback (RLHF) and Constitutional AI (CAI) have been proposed for LLM alignment. However, these methods require either heavy human annotation or explicitly pre-defined constitutions, which are labor-intensive and resource-consuming. To overcome these drawbacks, we study constitution-based LLM alignment and propose a data-driven constitution discovery and self-alignment framework called IterAlign. IterAlign leverages red teaming to unveil the weaknesses of an LLM and automatically discovers new constitutions using a stronger LLM. These constitutions are then used to guide self-correction of the base LLM. Such a constitution discovery pipeline can be run iteratively and automatically to discover new constitutions that specifically target the alignment gaps in the current LLM. Empirical results on several safety benchmark datasets and multiple base LLMs show that IterAlign successfully improves truthfulness, helpfulness, harmlessness, and honesty, raising LLM harmlessness by up to $13.5\%$.
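The iterative pipeline the abstract outlines (red teaming, constitution discovery with a stronger LLM, then constitution-guided self-correction) can be summarized in a few lines of pseudocode. The following is a minimal sketch under the assumption that each model is a prompt-to-completion callable; every function name here (red_team, propose_constitutions, self_correct, iteralign) is a hypothetical placeholder for illustration, not the authors' published implementation.

```python
# Minimal sketch of the IterAlign-style loop described in the abstract.
# All function names and interfaces below are hypothetical placeholders.
from typing import Callable, List

# An "LLM" here is modeled as a simple prompt -> completion function.
LLM = Callable[[str], str]

def red_team(base_llm: LLM, prompts: List[str]) -> List[str]:
    """Collect the base model's responses to adversarial prompts;
    a real system would filter for responses that expose failures."""
    return [base_llm(p) for p in prompts]  # failure filtering elided

def propose_constitutions(strong_llm: LLM, failures: List[str]) -> List[str]:
    """Ask a stronger LLM to distill observed failures into new
    constitution statements (data-driven, not pre-defined)."""
    joined = "\n".join(failures)
    return [strong_llm(f"Write rules that would prevent these failures:\n{joined}")]

def self_correct(base_llm: LLM, prompt: str, constitutions: List[str]) -> str:
    """Guide the base model to revise its own answer under the
    discovered constitutions (in-context self-correction)."""
    rules = "\n".join(constitutions)
    draft = base_llm(prompt)
    return base_llm(f"Rules:\n{rules}\nRevise this answer to follow the rules:\n{draft}")

def iteralign(base_llm: LLM, strong_llm: LLM,
              red_team_prompts: List[str], iterations: int = 3) -> List[str]:
    """Run discovery-then-correction iteratively; each round targets
    the alignment gaps remaining in the current model."""
    constitutions: List[str] = []
    for _ in range(iterations):
        failures = red_team(base_llm, red_team_prompts)
        constitutions += propose_constitutions(strong_llm, failures)
        # The paper would use corrected responses to further train the
        # base model; this sketch shows only the in-context correction.
        _ = [self_correct(base_llm, p, constitutions) for p in red_team_prompts]
    return constitutions
```

The key design point this sketch illustrates is that constitutions are discovered from the base model's own failure data each round, so later iterations target whatever gaps remain rather than relying on a fixed, human-written rule set.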