As future superhuman models grow increasingly complex, accurately supervising their behavior may exceed human capabilities. Recent work has shown that in such scenarios, weak models can effectively supervise strong models, a phenomenon known as weak-to-strong generalization. However, we find that naive weak-to-strong generalization fails under distribution shift, often causing the strong model to perform worse than its weak supervisors. To address this, we propose RAVEN, a robust weak-to-strong generalization framework that dynamically learns an optimal combination of weak models jointly with the parameters of the strong model. We demonstrate the effectiveness of RAVEN on image classification, text classification, and preference alignment tasks. RAVEN outperforms alternative baselines by over 30% on out-of-distribution tasks while matching or surpassing existing methods on in-distribution tasks. Moreover, our results show that RAVEN assigns higher weights to more accurate weak models, demonstrating its ability to automatically identify trustworthy supervision.
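As a rough illustration of the joint objective described above, the following is a minimal sketch (not the authors' released code, and the actual objective in the paper may differ) of learning softmax mixing weights over frozen weak supervisors together with the strong model's parameters. All names here (`WeakEnsembleSupervisor`, `strong_model`, `weak_probs`) are hypothetical.

```python
# Minimal sketch: learnable mixing weights over weak supervisors are
# optimized jointly with the strong model's parameters. Hypothetical
# names; this is an illustration, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeakEnsembleSupervisor(nn.Module):
    """Combines K weak models' predictions with learnable weights."""
    def __init__(self, num_weak: int):
        super().__init__()
        # One logit per weak model; softmax keeps weights positive and summing to 1.
        self.mixing_logits = nn.Parameter(torch.zeros(num_weak))

    def forward(self, weak_probs: torch.Tensor) -> torch.Tensor:
        # weak_probs: (K, batch, num_classes) class probabilities from K weak models.
        w = F.softmax(self.mixing_logits, dim=0)          # (K,) mixture weights
        return torch.einsum("k,kbc->bc", w, weak_probs)   # weighted pseudo-labels

# Toy setup: K=3 frozen weak models supervising one strong model.
K, batch, dim, num_classes = 3, 8, 16, 4
strong_model = nn.Linear(dim, num_classes)
supervisor = WeakEnsembleSupervisor(K)
opt = torch.optim.Adam(
    list(strong_model.parameters()) + list(supervisor.parameters()), lr=1e-2
)

x = torch.randn(batch, dim)
# Stand-in for the frozen weak models' predictions on x.
weak_probs = F.softmax(torch.randn(K, batch, num_classes), dim=-1)

for _ in range(10):
    pseudo = supervisor(weak_probs)                  # dynamically combined targets
    loss = F.cross_entropy(strong_model(x), pseudo)  # soft-label cross-entropy
    opt.zero_grad()
    loss.backward()
    opt.step()

# Learned weights over the weak models (the abstract reports that more
# accurate weak models receive higher weight).
print(F.softmax(supervisor.mixing_logits, dim=0))
```

Using a softmax over per-model logits is one simple way to keep the combination a convex mixture so the learned weights are directly interpretable as the trust assigned to each weak supervisor.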