Garment simulation is fundamental to many applications in computer vision and graphics, from virtual try-on to digital human modelling. However, conventional physics-based methods remain computationally expensive, hindering their use in time-sensitive scenarios. While graph neural networks (GNNs) offer promising acceleration, existing approaches generalise poorly across mesh resolutions, degrading significantly on higher-resolution meshes outside the training distribution. This stems from two key factors: (1) existing GNNs employ a fixed message-passing depth that cannot adapt information aggregation to variations in mesh density, and (2) vertex-wise displacement magnitudes are inherently resolution-dependent in garment simulation. To address these issues, we introduce the Propagation-before-Update Graph Network (Pb4U-GNet), a resolution-adaptive framework that decouples message propagation from feature updates. Pb4U-GNet incorporates two key mechanisms: (1) dynamic propagation depth control, which adjusts the number of message-passing iterations to the mesh resolution, and (2) geometry-aware update scaling, which scales predicted displacements according to local mesh characteristics. Extensive experiments show that, even when trained solely on low-resolution meshes, Pb4U-GNet generalises well across diverse mesh resolutions, addressing a fundamental challenge in neural garment simulation.
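The two mechanisms can be sketched in a minimal form. The sqrt-based depth heuristic, the neighbour-averaging propagation, and the edge-length scaling below are illustrative assumptions for exposition, not the paper's actual learned architecture: in practice the propagation and update steps would be parameterised networks.

```python
import numpy as np

def propagation_depth(n_vertices, n_train=400, k_train=4):
    # Dynamic propagation depth control (assumed heuristic): depth grows with
    # the linear mesh resolution (~sqrt of vertex count) so the receptive
    # field spans a comparable physical region at every resolution.
    return max(1, round(k_train * np.sqrt(n_vertices / n_train)))

def simulate_step(pos, neighbors, k):
    # Propagation phase: k rounds of neighbour averaging aggregate context
    # WITHOUT updating vertex positions (propagation decoupled from update).
    feat = pos.copy()
    for _ in range(k):
        feat = np.stack([feat[nbrs].mean(axis=0) for nbrs in neighbors])
    # Geometry-aware update scaling: scale the predicted displacement by the
    # local mean edge length, keeping magnitudes resolution-consistent.
    edge_len = np.array([np.linalg.norm(pos[nbrs] - pos[i], axis=1).mean()
                         for i, nbrs in enumerate(neighbors)])
    delta = feat - pos  # stand-in for a learned displacement decoder
    return pos + edge_len[:, None] * delta
```

A mesh with 4x the vertices (2x linear resolution) would thus receive twice the message-passing depth, while its smaller edge lengths shrink the per-vertex update magnitudes accordingly.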