Modern large language model (LLM) alignment techniques rely on human feedback, but it is unclear whether these techniques fundamentally limit the capabilities of aligned LLMs. In particular, it is unknown whether it is possible to align (stronger) LLMs with superhuman capabilities using (weaker) human feedback without degrading their capabilities. This is an instance of the weak-to-strong generalization problem: using feedback from a weaker (less capable) model to train a stronger (more capable) model. We prove that weak-to-strong generalization is possible by eliciting latent knowledge from pre-trained LLMs. In particular, we cast the weak-to-strong generalization problem as a transfer learning problem in which we wish to transfer a latent concept prior from a weak model to a strong pre-trained model. We prove that a naive fine-tuning approach suffers from fundamental limitations, but an alternative refinement-based approach suggested by the problem structure provably overcomes the limitations of fine-tuning. Finally, we demonstrate the practical applicability of the refinement approach in multiple LLM alignment tasks.