The integration of visual understanding and generation into unified multimodal models represents a significant stride toward general-purpose AI. However, a fundamental question remains unanswered by existing benchmarks: does this architectural unification actually enable synergetic interaction between the constituent capabilities? Existing evaluation paradigms, which primarily assess understanding and generation in isolation, are insufficient for determining whether a unified model can leverage its understanding to enhance its generation, or use generative simulation to facilitate deeper comprehension. To address this critical gap, we introduce RealUnify, a benchmark specifically designed to evaluate bidirectional capability synergy. RealUnify comprises 1,000 meticulously human-annotated instances spanning 10 categories and 32 subtasks. It is structured around two core axes: 1) Understanding Enhances Generation, which requires reasoning (e.g., commonsense, logic) to guide image generation, and 2) Generation Enhances Understanding, which necessitates mental simulation or reconstruction (e.g., of transformed or disordered visual inputs) to solve reasoning tasks. A key contribution is our dual-evaluation protocol, which combines direct end-to-end assessment with a diagnostic stepwise evaluation that decomposes tasks into distinct understanding and generation phases. This protocol allows us to precisely discern whether performance bottlenecks stem from deficiencies in core abilities or from a failure to integrate them. Through large-scale evaluations of 12 leading unified models and 6 specialized baselines, we find that current unified models still struggle to achieve effective synergy, indicating that architectural unification alone is insufficient. These results highlight the need for new training strategies and inductive biases to fully unlock the potential of unified modeling.