In this paper, we present PolyFit, a patch-based surface representation obtained by fitting jet functions locally on surface patches. Such a representation can be learned efficiently in a supervised fashion from both analytic functions and real data, and, once learned, it generalizes to various types of surfaces. With PolyFit, surfaces can be deformed efficiently by updating a compact set of jet coefficients rather than by optimizing per-vertex degrees of freedom, which benefits many downstream tasks in computer vision and graphics. We demonstrate the capabilities of our approach with two applications. 1) Shape-from-template (SfT), where the goal is to deform an input 3D template of an object to match its appearance in an image or video. Using PolyFit, we adopt a test-time optimization that delivers competitive accuracy while being markedly faster than offline physics-based solvers, and outperforms recent physics-guided neural simulators in accuracy at modest additional runtime. 2) Garment draping, for which we train a self-supervised, mesh- and garment-agnostic model that generalizes across resolutions and garment types, delivering up to an order-of-magnitude faster inference than strong baselines.
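To make the core idea concrete, the following is a minimal sketch of fitting a jet (here, a degree-2 polynomial height field) to a local surface patch by least squares, and of deforming the patch by editing its coefficients rather than its vertices. The function names, the quadratic basis, and the sampled paraboloid are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def jet_basis(x, y):
    """Degree-2 monomial basis evaluated at patch coordinates (x, y)."""
    return np.stack([np.ones_like(x), x, y, x * x, x * y, y * y], axis=-1)

def fit_jet(points):
    """Least-squares fit of 6 jet coefficients to patch points of shape (N, 3)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = jet_basis(x, y)                          # (N, 6) design matrix
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def eval_jet(coeffs, x, y):
    """Reconstruct patch heights from the compact jet representation."""
    return jet_basis(x, y) @ coeffs

# Sample a paraboloid patch z = 0.5 * (x^2 + y^2) and recover its jet.
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
z = 0.5 * (xy[:, 0] ** 2 + xy[:, 1] ** 2)
pts = np.column_stack([xy, z])

c = fit_jet(pts)                                 # ≈ [0, 0, 0, 0.5, 0, 0.5]

# Deforming the surface amounts to updating the 6 coefficients,
# not the 200 per-vertex positions: here we increase the x-curvature.
c_bent = c + np.array([0.0, 0.0, 0.0, 0.2, 0.0, 0.0])
z_new = eval_jet(c_bent, xy[:, 0], xy[:, 1])
```

The design point the abstract makes is visible in the last two lines: a deformation of the whole patch is expressed as an update to six numbers, which is what makes optimization over jet coefficients cheap compared with per-vertex degrees of freedom.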