We present learning-based implicit shape representations designed for real-time avatar collision queries arising in the simulation of clothing. Signed distance functions (SDFs) have been used for such queries for many years due to their computational efficiency. Recently, deep neural networks have been used for implicit shape representations (DeepSDFs) due to their ability to represent multiple shapes with modest memory requirements compared to traditional representations over dense grids. However, the computational expense of DeepSDFs prevents their use in real-time clothing simulation applications. We design a learning-based representation of SDFs for human avatars whose bodies change shape kinematically due to joint-based skinning. Rather than using a single DeepSDF for the entire avatar, we use a collection of extremely computationally efficient (shallow) neural networks that represent localized deformations arising from changes in body shape induced by the variation of a single joint. This requires a stitching process that combines the shallow SDFs in the collection into a single SDF representing the signed closest distance to the boundary of the entire body. To achieve this, we augment each shallow SDF with an additional output that resolves whether the individual shallow SDF value refers to a closest point on the boundary of the body, or to a point in the interior of the body (but on the boundary of the individual shallow SDF). Our model is extremely fast and accurate, and we demonstrate its applicability with real-time simulation of garments driven by animated characters.
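The stitching step described above can be sketched as follows. This is a minimal illustration under assumed interfaces, not the paper's implementation: each shallow SDF is modeled as a callable returning a distance together with a flag indicating whether the closest point lies on the true body boundary (as opposed to an interior cut of the local region), and the valid distances are combined with a minimum. The toy shallow SDFs (two unit spheres, each valid in one half-space) are hypothetical stand-ins for the learned networks.

```python
import numpy as np

def stitch_sdf(query, shallow_sdfs):
    """Combine per-joint shallow SDFs into one whole-body SDF value at `query`.

    Each shallow SDF returns (distance, on_body_boundary); distances whose
    closest point lies on an interior cut (flag False) are ignored, and the
    remaining valid distances are merged with a minimum.
    """
    best = np.inf
    for sdf in shallow_sdfs:
        dist, on_body_boundary = sdf(query)
        if on_body_boundary:  # skip distances to interior (non-body) boundaries
            best = min(best, dist)
    return best

def sphere_sdf(center, valid_halfspace_sign):
    """Toy shallow SDF: a unit sphere, 'valid' only in one x half-space."""
    def f(p):
        d = np.linalg.norm(p - center) - 1.0
        valid = valid_halfspace_sign * p[0] >= 0.0
        return d, valid
    return f

# Two overlapping unit spheres, each responsible for its own half-space.
sdfs = [sphere_sdf(np.array([-0.5, 0.0, 0.0]), -1),
        sphere_sdf(np.array([0.5, 0.0, 0.0]), 1)]

print(stitch_sdf(np.array([2.0, 0.0, 0.0]), sdfs))  # distance to the right sphere: 0.5
```

In the paper's setting the validity flag is a learned extra output of each shallow network; here it is hard-coded geometrically only to make the minimum-with-masking combination concrete.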