Recent advancements in predictive models have demonstrated exceptional capabilities in forecasting the future state of objects and scenes. However, the lack of categorization based on inherent characteristics continues to hinder the progress of predictive model development. Additionally, existing benchmarks are unable to effectively evaluate higher-capability, highly embodied predictive models from an embodied perspective. In this work, we classify the functionalities of predictive models into a hierarchy and take the first step in evaluating World Simulators by proposing a dual evaluation framework called WorldSimBench. WorldSimBench includes Explicit Perceptual Evaluation and Implicit Manipulative Evaluation, encompassing human preference assessments from the visual perspective and action-level evaluations in embodied tasks, covering three representative embodied scenarios: Open-Ended Embodied Environment, Autonomous Driving, and Robot Manipulation. In the Explicit Perceptual Evaluation, we introduce the HF-Embodied Dataset, a video assessment dataset based on fine-grained human feedback, which we use to train a Human Preference Evaluator that aligns with human perception and explicitly assesses the visual fidelity of World Simulators. In the Implicit Manipulative Evaluation, we assess the video-action consistency of World Simulators by evaluating whether the generated situation-aware videos can be accurately translated into the correct control signals in dynamic environments. Our comprehensive evaluation offers key insights that can drive further innovation in video generation models, positioning World Simulators as a pivotal advancement toward embodied artificial intelligence.