Evaluating whether Multimodal Large Language Models (MLLMs) genuinely reason about physical dynamics remains challenging. Most existing benchmarks rely on recognition-style protocols such as Visual Question Answering (VQA) and Violation of Expectation (VoE), which can often be answered without committing to an explicit, testable physical hypothesis. We propose VisPhyWorld, an execution-based framework that evaluates physical reasoning by requiring models to generate executable simulator code from visual observations. Because the model must produce runnable code, the inferred world representation is directly inspectable, editable, and falsifiable, and physical reasoning is separated from rendering. Building on this framework, we introduce VisPhyBench, comprising 209 evaluation scenes derived from 108 physical templates, together with a systematic protocol that evaluates how well models reconstruct appearance and reproduce physically plausible motion. Our pipeline produces valid reconstructed videos for 97.7% of scenes on the benchmark. Experiments show that while state-of-the-art MLLMs achieve strong semantic scene understanding, they struggle to accurately infer physical parameters and to simulate consistent physical dynamics.
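To make the execution-based idea concrete, the sketch below shows one minimal way such an evaluation loop could look: model-generated simulator code is executed in an isolated namespace, and the resulting trajectory is scored against a reference roll-out. All names here (`run_generated_code`, `score_motion`, the expected `simulate` interface) are illustrative assumptions, not the benchmark's actual pipeline or API.

```python
# Minimal sketch of an execution-based check in the spirit of VisPhyWorld.
# The helper names and the expected `simulate(n_steps)` convention are
# hypothetical illustrations, not APIs from the benchmark.
from typing import Optional
import numpy as np

def run_generated_code(code_str: str, n_steps: int = 60) -> Optional[np.ndarray]:
    """Execute model-generated simulator code in an isolated namespace.

    By convention in this sketch, the generated code defines a function
    `simulate(n_steps)` returning an (n_steps, n_objects, 2) array of object
    positions. Returns None if the code fails to run.
    """
    namespace: dict = {}
    try:
        exec(code_str, namespace)              # run the candidate world model
        return np.asarray(namespace["simulate"](n_steps), dtype=float)
    except Exception:
        return None                            # invalid / non-executable reconstruction

def score_motion(pred: np.ndarray, ref: np.ndarray) -> float:
    """Crude motion score: mean positional error against a reference roll-out."""
    t = min(len(pred), len(ref))
    return float(np.mean(np.linalg.norm(pred[:t] - ref[:t], axis=-1)))

# Toy usage: a "generated" free-fall simulation compared to an analytic reference.
generated = """
import numpy as np
def simulate(n_steps, dt=1/30, g=9.8):
    y, v, pos = 5.0, 0.0, []
    for _ in range(n_steps):
        v -= g * dt
        y += v * dt
        pos.append([[0.0, y]])                # one object, (x, y) position
    return np.array(pos)
"""
reference = np.array([[[0.0, 5.0 - 0.5 * 9.8 * (i / 30) ** 2]] for i in range(60)])
traj = run_generated_code(generated)
print("valid:", traj is not None,
      "motion error:", None if traj is None else round(score_motion(traj, reference), 3))
```

In this toy setup, "validity" corresponds to the code executing and returning a trajectory at all, while the motion score compares the reconstructed dynamics to a reference; the benchmark's actual appearance and motion metrics are described in the paper.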