This paper addresses the challenge of aligning large language models (LLMs) with human values via preference learning (PL), focusing on incomplete and corrupted data in preference datasets. We propose a novel method for robustly and completely recalibrating preference values within these datasets, enhancing LLMs' resilience to these issues. In particular, we devise a ranking algorithm with guaranteed polynomial running time that robustifies several existing models, such as the classic Bradley-Terry-Luce (BTL) model (Bradley and Terry, 1952) and certain generalizations of it. To the best of our knowledge, our present work is the first to propose an algorithm that provably recovers an $\epsilon$-optimal ranking with high probability while tolerating up to $O(n)$ perturbed pairwise comparison results per model response. Furthermore, we establish robust recovery results in the partially observed setting. Our experiments confirm that our algorithms handle adversarial noise and unobserved comparisons well, both in general ranking settings and on LLM preference datasets. This work contributes to the development and scaling of more reliable and ethically aligned AI models by equipping the dataset curation pipeline to handle missing and maliciously manipulated inputs.
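For reference, the standard BTL formulation (the paper's exact parameterization may differ) assigns each item $i$ a latent positive score $w_i$ and posits that $i$ is preferred over $j$ with probability
$$
\Pr(i \succ j) \;=\; \frac{w_i}{w_i + w_j},
$$
so a ranking corresponds to sorting items by their latent scores. The robustness question is then whether the ranking can still be recovered when up to $O(n)$ of the observed pairwise comparisons per response are adversarially perturbed.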