Large language models (LLMs) have revolutionized the role of AI, yet pose potential social risks. To steer LLMs towards human preferences, alignment techniques have been introduced and have gained increasing attention. Nevertheless, existing methods rely heavily on high-quality positive-negative training pairs and suffer from noisy positive responses that are barely distinguishable from negative ones. Given recent LLMs' proficiency in generating helpful responses, this work pivots towards a new research question: can we achieve alignment using solely human-annotated negative samples, preserving helpfulness while reducing harmfulness? For this purpose, we propose Distributional Dispreference Optimization (D$^2$O), which maximizes the discrepancy between dispreferred responses and self-generated non-negative ones. In this way, D$^2$O effectively eschews harmful information without incorporating noisy positive samples, while avoiding collapse by using the self-generated responses as anchors. We show that D$^2$O can be regarded as learning a distributional preference model that reflects human dispreference against negative responses and is theoretically an upper bound of the instance-level DPO. Extensive experiments demonstrate that our method achieves comparable generation quality and surpasses the latest strong baselines in producing less harmful and more informative responses, with better training stability and faster convergence.
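To make the training signal concrete, one possible instantiation of the objective (a hedged sketch for illustration, not necessarily the paper's exact formulation) contrasts $K$ responses $y_1, \dots, y_K$ sampled from the model itself against each human-annotated dispreferred response $y_l$ in a DPO-style logistic loss, where $\pi_\theta$ denotes the policy, $\pi_{\mathrm{ref}}$ a frozen reference model, $\beta$ a temperature, and $\sigma$ the sigmoid; the uniform averaging over the $K$ anchors is likewise an assumption made here for exposition:
\[
\mathcal{L}_{\mathrm{D^2O}}(\theta) \;\approx\; -\,\mathbb{E}_{(x,\, y_l) \sim \mathcal{D}}\!\left[\log \sigma\!\left(\frac{\beta}{K}\sum_{k=1}^{K}\log\frac{\pi_\theta(y_k \mid x)}{\pi_{\mathrm{ref}}(y_k \mid x)} \;-\; \beta \log\frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right],
\]
with the anchor responses $y_1, \dots, y_K$ drawn from the model itself. Averaging over the $K$ self-generated anchors approximates an expectation over the model's own response distribution, which is what makes the dispreference distributional rather than instance-level and keeps the policy tethered to its own non-negative outputs instead of collapsing.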