Generative learning models have recently made significant progress in image-to-image translation, i.e., in estimating complex (steered) mappings between image distributions. While appearance-based tasks such as image inpainting or style transfer have been studied extensively, we propose to investigate the potential of generative models in the context of physical simulations. Providing a dataset of 300k image pairs and baseline evaluations for three different physical simulation tasks, we propose a benchmark to investigate the following research questions: i) Can generative models learn complex physical relations from input-output image pairs? ii) What speedups can be achieved by replacing differential-equation-based simulations? While baseline evaluations of different current models show the potential for high speedups (ii), these results also reveal strong limitations with respect to physical correctness (i). This underlines the need for new methods that enforce physical correctness. Data, baseline models, and evaluation code are available at http://www.physics-gen.org.
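To make the framing of research question (i) concrete, the following is a minimal sketch, not the benchmark's code, of how one simulation step can be cast as paired image-to-image translation: a small convolutional generator maps an input-state image to a predicted output-state image and is trained with an L1 reconstruction loss. The architecture, image size, and the random stand-in batch are illustrative assumptions.

```python
# Minimal sketch of paired image-to-image translation for simulation data.
# All names and hyperparameters here are hypothetical, not from the benchmark.
import torch
import torch.nn as nn


class TinyTranslator(nn.Module):
    """Encoder-decoder mapping a 3-channel input image to a 3-channel output image."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def train_step(model, optimizer, inputs, targets):
    """One optimization step on a batch of (input state, simulated output) image pairs."""
    optimizer.zero_grad()
    loss = nn.functional.l1_loss(model(inputs), targets)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = TinyTranslator()
    optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
    # Stand-in batch; in practice these would be image pairs from the dataset.
    inputs = torch.rand(4, 3, 64, 64)
    targets = torch.rand(4, 3, 64, 64)
    print(train_step(model, optimizer, inputs, targets))
```

A pixel-wise loss like the one above measures visual similarity only; it does not by itself enforce the physical correctness that the baseline evaluations find lacking.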