Recent advances in digital avatar technology have enabled the generation of compelling virtual characters, but deploying these avatars on compute-constrained devices poses significant challenges for achieving realistic garment deformations. While physics-based simulations yield accurate results, they are computationally prohibitive for real-time applications. Conversely, linear blend skinning offers efficiency but fails to capture the complex dynamics of loose-fitting garments, resulting in unrealistic motion and visual artifacts. Neural methods have shown promise, yet they struggle to animate loose clothing plausibly under strict performance constraints. In this work, we present a novel approach for fast and physically plausible garment draping tailored to resource-constrained environments. Our method leverages a reduced-space quasi-static neural simulation, mapping the garment's full degrees of freedom to a small set of bone handles that drive its deformation. A neural deformation model is trained in a fully self-supervised manner, eliminating the need for costly simulation data. At runtime, a lightweight neural network modulates the handle deformations based on body shape and pose, enabling realistic garment behavior that respects physical properties such as gravity, fabric stretching, bending, and collision avoidance. Experimental results demonstrate that our method achieves physically plausible garment drapes while generalizing across diverse poses and body shapes, supporting zero-shot evaluation and mesh topology independence. Our method significantly outperforms prior work in runtime, requiring only microseconds per frame with single-threaded CPU inference, and thus offers a practical solution for real-time avatar animation on low-compute devices.
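To make the reduced-space idea concrete, the sketch below illustrates one plausible reading of the pipeline described above: a small set of handle transforms (which the paper predicts with a lightweight, self-supervised network from body pose and shape) drives the full garment mesh through blend skinning. All function and variable names here are hypothetical, and the placeholder regressor stands in for the paper's neural model; this is a minimal illustration under those assumptions, not the authors' implementation.

```python
import numpy as np

def predict_handle_transforms(pose_shape_params, num_handles):
    """Stand-in for the lightweight runtime network.

    In the described method, an MLP maps body pose and shape parameters to
    per-handle deformations; here we return identity transforms so the
    example is self-contained and runnable.
    """
    return np.tile(np.eye(4), (num_handles, 1, 1))          # (H, 4, 4)

def skin_garment(rest_verts, handle_weights, handle_transforms):
    """Deform garment vertices from a reduced set of bone handles.

    rest_verts:        (V, 3) garment vertices in the rest pose
    handle_weights:    (V, H) per-vertex weights over H handles (rows sum to 1)
    handle_transforms: (H, 4, 4) affine transforms for each handle
    """
    V = rest_verts.shape[0]
    homo = np.concatenate([rest_verts, np.ones((V, 1))], axis=1)    # (V, 4)
    per_handle = np.einsum('hij,vj->vhi', handle_transforms, homo)  # (V, H, 4)
    blended = np.einsum('vh,vhi->vi', handle_weights, per_handle)   # (V, 4)
    return blended[:, :3]

# Example: 1000 garment vertices driven by 16 handles.
verts = np.random.rand(1000, 3)
weights = np.random.rand(1000, 16)
weights /= weights.sum(axis=1, keepdims=True)
transforms = predict_handle_transforms(pose_shape_params=None, num_handles=16)
draped = skin_garment(verts, weights, transforms)            # (1000, 3)
```

Because the runtime cost is dominated by one small network evaluation plus a blend-skinning pass over a few handles, rather than a per-vertex simulation, this structure is consistent with the microsecond-scale, single-threaded CPU inference reported in the abstract.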