Large Multi-modality Models (LMMs) have made significant progress in visual understanding and generation, but they still face challenges in General Visual Editing, particularly in following complex instructions, preserving appearance consistency, and supporting flexible input formats. To address this gap, we introduce RISEBench, the first benchmark for evaluating Reasoning-Informed viSual Editing (RISE). RISEBench focuses on four key reasoning types: Temporal, Causal, Spatial, and Logical Reasoning. We curate high-quality test cases for each category and propose an evaluation framework that assesses Instruction Reasoning, Appearance Consistency, and Visual Plausibility with both human judges and an LMM-as-a-judge approach. Our experiments reveal that while GPT-4o-Native significantly outperforms other open-source and proprietary models, even this state-of-the-art system struggles with logical reasoning tasks, highlighting an area that remains underexplored. As an initial effort, RISEBench aims to provide foundational insights into reasoning-aware visual editing and to catalyze future research. Though still in its early stages, we are committed to continuously expanding and refining the benchmark to support more comprehensive, reliable, and scalable evaluations of next-generation multimodal systems. Our code and data will be released at https://github.com/PhoenixZ810/RISEBench.