Recent advances in human preference alignment have significantly enhanced multimodal generation and understanding. A key approach is training reward models to guide preference optimization. However, existing reward models are often task-specific, limiting their adaptability across diverse visual applications. We further argue that jointly learning to assess multiple tasks may foster a synergistic effect, where improved image understanding enhances image generation assessment, and refined image evaluation benefits video assessment through better frame analysis. To this end, this paper proposes UnifiedReward, the first unified reward model for multimodal understanding and generation assessment, supporting both pairwise ranking and pointwise scoring, which can be employed for vision model preference alignment. Specifically, (1) we first develop UnifiedReward on our constructed large-scale human preference dataset, covering both image and video generation/understanding tasks. (2) Then, it is used to automatically construct high-quality preference-pair data from the vision models' outputs, applying fine-grained filtering through pair ranking and point sifting. (3) Finally, these data are used to align the vision models' preferences through Direct Preference Optimization (DPO). Experimental results demonstrate that jointly learning to assess diverse visual tasks yields substantial mutual benefits. We apply our pipeline to both image and video understanding and generation tasks, significantly improving performance in each domain.
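To make steps (2) and (3) concrete, below is a minimal sketch of a pair-ranking plus point-sifting filter that builds (chosen, rejected) preference pairs, followed by the standard DPO objective (Rafailov et al., 2023). The function names `reward_rank` and `reward_score` and the margin threshold `min_margin` are hypothetical stand-ins for UnifiedReward's interface, not the paper's actual API.

```python
# Sketch of the fine-grained filtering pipeline: pointwise scores sort the
# candidates, then a pairwise check confirms each (chosen, rejected) pair.
# `reward_rank` and `reward_score` are hypothetical stand-ins for the
# reward model's pairwise and pointwise heads.
import torch
import torch.nn.functional as F


def build_preference_pairs(prompt, candidates, reward_rank, reward_score,
                           min_margin=0.5):
    """Filter model outputs into (prompt, chosen, rejected) pairs.

    reward_rank(prompt, a, b) -> True if a is preferred over b (pairwise).
    reward_score(prompt, a)   -> scalar quality score (pointwise).
    """
    scored = [(reward_score(prompt, c), c) for c in candidates]
    scored.sort(key=lambda sc: sc[0], reverse=True)
    pairs = []
    for i, (s_hi, hi) in enumerate(scored):
        for s_lo, lo in scored[i + 1:]:
            # Keep a pair only when both signals agree: the pointwise
            # margin is large enough AND the pairwise ranker prefers hi.
            if s_hi - s_lo >= min_margin and reward_rank(prompt, hi, lo):
                pairs.append((prompt, hi, lo))
    return pairs


def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Standard DPO objective: -log sigmoid of the scaled log-ratio gap."""
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    return -F.logsigmoid(logits).mean()


# Toy usage with made-up sequence log-probabilities:
loss = dpo_loss(torch.tensor([-1.0]), torch.tensor([-2.0]),
                torch.tensor([-1.2]), torch.tensor([-1.8]))
```

Requiring both the pointwise margin and the pairwise preference to agree is one plausible reading of "pair ranking and point sifting": it discards pairs where the two assessment modes conflict, trading data quantity for cleaner supervision before DPO.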