Reinforcement learning from human feedback (RLHF) has proven effective in enhancing the instruction-following capabilities of large language models; however, it remains underexplored in the cross-modality domain. As the number of modalities increases, aligning all-modality models with human intentions -- such as instruction following -- becomes a pressing challenge. In this work, we make the first attempt to fine-tune all-modality models (i.e., models that accept and produce any modality, also known as any-to-any models) using human preference data across all modalities (including text, image, audio, and video), ensuring that their behavior aligns with human intentions. This endeavor presents several challenges. First, existing open-source resources contain no large-scale all-modality human preference data; most datasets are limited to specific modalities, predominantly text and image. Second, the effectiveness of binary preferences in RLHF for post-training alignment in complex all-modality scenarios remains unexplored. Finally, there is no systematic framework for evaluating the capabilities of all-modality models, particularly regarding modality selection and synergy. To address these challenges, we propose the align-anything framework, which includes 200k meticulously annotated all-modality human preference data. We then introduce an alignment method that learns from unified language feedback, effectively capturing complex, modality-specific human preferences and enhancing the model's instruction-following capabilities. Furthermore, to assess the performance improvements of all-modality models after post-training alignment, we construct a challenging all-modality capability evaluation framework -- eval-anything. All data, models, and code frameworks have been open-sourced for the community. For more details, please refer to https://github.com/PKU-Alignment/align-anything.
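As background for the binary-preference RLHF setting the abstract contrasts against (standard practice, not the unified language feedback method proposed in this work), preference annotations are typically fit with a Bradley-Terry reward objective: given a prompt $x$, a chosen response $y_w$, and a rejected response $y_l$, a reward model $r_\phi$ is trained by minimizing
\[
\mathcal{L}(\phi) = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}\left[\log \sigma\big(r_\phi(x, y_w) - r_\phi(x, y_l)\big)\right],
\]
where $\sigma$ is the logistic function and $\mathcal{D}$ is the preference dataset; the policy is then optimized against $r_\phi$ (e.g., with PPO) or trained directly on the pairs (e.g., with DPO).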