Realistic hair motion is crucial for high-quality avatars, but it is often limited by the computational resources available for real-time applications. To address this challenge, we propose a novel neural approach that predicts physically plausible hair deformations and generalizes to unseen body poses, shapes, and hairstyles. Our model is trained with a self-supervised loss, eliminating the need for expensive data generation and storage. We demonstrate our method's effectiveness across a wide range of pose and shape variations, showcasing its robust generalization and temporally smooth predictions. Our approach is well suited for real-time applications: it runs in only a few milliseconds on consumer hardware and scales to predicting the drape of 1,000 grooms in 0.3 seconds.