Modeling wind-driven object dynamics from video observations is highly challenging due to the invisibility and spatio-temporal variability of wind, as well as the complex deformations of objects. We present DiffWind, a physics-informed differentiable framework that unifies wind-object interaction modeling, video-based reconstruction, and forward simulation. Specifically, we represent wind as a grid-based physical field and objects as particle systems derived from 3D Gaussian Splatting, with their interaction modeled by the Material Point Method (MPM). To recover wind-driven object dynamics, we introduce a reconstruction framework that jointly optimizes the spatio-temporal wind force field and object motion through differentiable rendering and simulation. To ensure physical validity, we incorporate the Lattice Boltzmann Method (LBM) as a physics-informed constraint, enforcing compliance with fluid dynamics laws. Beyond reconstruction, our method naturally supports forward simulation under novel wind conditions and enables new applications such as wind retargeting. We further introduce WD-Objects, a dataset of synthetic and real-world wind-driven scenes. Extensive experiments demonstrate that our method significantly outperforms prior dynamic scene modeling approaches in both reconstruction accuracy and simulation fidelity, opening a new avenue for video-based wind-object interaction modeling.
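The core idea of optimizing an unobserved wind force through a differentiable simulator can be illustrated with a deliberately minimal toy sketch. The example below is not the paper's method (which couples MPM particles, a grid wind field, 3D Gaussian Splatting, and LBM constraints); it only shows the reconstruction principle in 1-D, assuming a constant scalar wind acceleration and a closed-form particle trajectory, with the gradient derived analytically.

```python
import numpy as np

# Toy sketch (illustrative only): recover an unknown constant wind
# acceleration from an observed 1-D particle trajectory by gradient
# descent through a differentiable simulator. The scalar-wind and
# closed-form-trajectory simplifications are assumptions for clarity.

dt, steps = 0.1, 20
t = np.arange(1, steps + 1) * dt  # observation timestamps

def simulate(wind_accel):
    # differentiable forward model: particle starts at rest,
    # position x(t) = 0.5 * a * t^2 under constant acceleration
    return 0.5 * wind_accel * t**2

true_wind = 2.5                    # ground-truth force (hidden)
observed = simulate(true_wind)     # stand-in for video observations

wind, lr = 0.0, 0.5                # initial guess and step size
for _ in range(200):
    residual = simulate(wind) - observed
    # analytic gradient of the squared-error loss w.r.t. wind_accel:
    # d/dw sum (0.5*w*t^2 - obs)^2 = 2 * sum residual * 0.5*t^2
    grad = 2.0 * np.sum(residual * 0.5 * t**2)
    wind -= lr * grad / len(t)

print(round(wind, 3))  # converges to the hidden value 2.5
```

The same loss-through-simulation pattern generalizes to the paper's setting, where the residual comes from differentiable rendering of the reconstructed scene rather than from trajectories observed directly.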