Modern large language model (LLM) alignment techniques rely on human feedback, but it is unclear whether these techniques fundamentally limit the capabilities of aligned LLMs. In particular, it is unclear whether it is possible to align (stronger) LLMs with superhuman capabilities using (weaker) human feedback without degrading their capabilities. This is an instance of the weak-to-strong generalization problem: using weaker (less capable) feedback to train a stronger (more capable) model. We prove that weak-to-strong generalization is possible by eliciting latent knowledge from pre-trained LLMs. Specifically, we cast the weak-to-strong generalization problem as a transfer learning problem in which we wish to transfer a latent concept from a weak model to a strong pre-trained model. We prove that a naive fine-tuning approach suffers from fundamental limitations, but an alternative refinement-based approach suggested by the problem structure provably overcomes them. Finally, we demonstrate the practical applicability of the refinement approach on three LLM alignment tasks.