Generating high-fidelity garment animations through traditional workflows, from modeling to rendering, is both tedious and expensive. These workflows often require repetitive steps in response to updates in character motion, changes of rendering viewpoint, or appearance edits. Although recent neural rendering methods offer an efficient alternative to these computationally intensive processes, they struggle to render complex garment animations with fine wrinkle details and realistic garment-body occlusions while maintaining structural consistency across frames and dense rendering views. In this paper, we propose a novel approach that directly synthesizes garment animations from body motion sequences, without the need for an explicit garment proxy. Our approach infers garment dynamic features from body motion, providing a coarse overview of the garment's structure. Simultaneously, we capture detailed features from synthesized reference images of the garment's front and back, generated by a pre-trained image model. These features are then used to construct a neural radiance field that renders the garment animation video. Additionally, our technique enables garment recoloring by decomposing the garment's visual elements. We demonstrate the generalizability of our method to unseen body motions and camera views while preserving detailed structural consistency. Furthermore, we showcase its applicability to color editing on both real and synthetic garment data. Compared to existing neural rendering techniques, our method exhibits qualitative and quantitative improvements in modeling garment dynamics and wrinkle details. Code is available at \url{https://github.com/wrk226/GarmentAnimationNeRF}.