Contemporary human avatar reconstruction techniques commonly require costly data acquisition and struggle to produce satisfactory results from a small number of casual images. In this paper, we investigate this task from a few-shot unconstrained photo album. Reconstructing human avatars from such data is challenging due to the limited amount of data and dynamic articulated poses. To handle dynamic data, we integrate a skinning mechanism with deep marching tetrahedra (DMTet) to form a drivable tetrahedral representation, which drives arbitrary mesh topologies generated by DMTet to adapt to unconstrained images. To effectively mine instructive information from few-shot data, we devise a two-phase optimization method with few-shot reference and few-shot guidance: the former focuses on aligning the avatar identity with the reference images, while the latter aims to generate plausible appearances for unseen regions. Overall, our framework, called HaveFun, can perform avatar reconstruction, rendering, and animation. Extensive experiments on our developed benchmarks demonstrate that HaveFun achieves substantially superior performance in reconstructing the human body and hand. Project website: https://seanchenxy.github.io/HaveFunWeb/.
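The drivable tetrahedral representation mentioned above rests on a skinning mechanism that deforms grid vertices according to articulated poses. As a rough illustration only (not the authors' implementation; the function name and toy data are assumptions), standard linear blend skinning of vertices can be sketched as:

```python
import numpy as np

def linear_blend_skinning(verts, weights, transforms):
    """Deform rest-pose vertices by a weighted blend of joint transforms.

    verts:      (V, 3) rest-pose vertex positions
    weights:    (V, J) per-vertex skinning weights (rows sum to 1)
    transforms: (J, 4, 4) per-joint rigid transforms
    """
    V = verts.shape[0]
    homo = np.concatenate([verts, np.ones((V, 1))], axis=1)  # (V, 4) homogeneous
    # Blend the joint transforms per vertex: (V, 4, 4)
    blended = np.einsum("vj,jab->vab", weights, transforms)
    posed = np.einsum("vab,vb->va", blended, homo)           # (V, 4)
    return posed[:, :3]

# Toy example: two vertices, each fully bound to one of two joints
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
weights = np.array([[1.0, 0.0], [0.0, 1.0]])
T = np.tile(np.eye(4), (2, 1, 1))
T[1, 0, 3] = 0.5  # joint 1 translates +0.5 along x
posed = linear_blend_skinning(verts, weights, T)
# vertex 0 stays at the origin; vertex 1 moves to x = 1.5
```

In a DMTet-style pipeline, the same blending would be applied to the tetrahedral grid vertices before the surface is extracted, so the resulting mesh can be driven by arbitrary poses.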