Creating a high-fidelity, animatable 3D full-body avatar from a single image is a challenging task due to the diverse appearance and poses of humans and the limited availability of high-quality training data. To achieve fast and high-quality human reconstruction, this work rethinks the task from the perspectives of dataset, model, and representation. First, we introduce a large-scale HUman-centric GEnerated dataset, HuGe100K, consisting of 100K diverse, photorealistic sets of human images. Each set contains 24-view frames in specific human poses, generated using a pose-controllable image-to-multi-view model. Next, leveraging the diversity in views, poses, and appearances within HuGe100K, we develop a scalable feed-forward transformer model to predict a 3D human Gaussian representation in a uniform space from a given human image. This model is trained to disentangle human pose, body shape, clothing geometry, and texture. The estimated Gaussians can be animated without post-processing. We conduct comprehensive experiments to validate the effectiveness of the proposed dataset and method. Our model can instantly reconstruct photorealistic humans at 1K resolution from a single input image on a single GPU. Additionally, it seamlessly supports various applications, including shape and texture editing.
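To make the representation concrete, the sketch below illustrates the kind of interface a feed-forward single-image-to-Gaussians model exposes: an image maps to a set of 3D Gaussian parameters in a canonical space, which can then be reposed directly. All names, shapes, and the random stand-in predictor are illustrative assumptions, not the paper's actual architecture or API.

```python
import numpy as np

def predict_gaussians(image: np.ndarray, num_gaussians: int = 1024) -> dict:
    """Hypothetical stand-in for the feed-forward transformer: returns
    randomly initialized Gaussian parameters with the shapes a typical
    Gaussian-splat renderer would expect (assumed layout, not the paper's)."""
    rng = np.random.default_rng(0)
    return {
        "positions": rng.normal(size=(num_gaussians, 3)),   # xyz in canonical space
        "scales":    np.abs(rng.normal(size=(num_gaussians, 3))),
        "rotations": np.tile([1.0, 0.0, 0.0, 0.0], (num_gaussians, 1)),  # unit quaternions
        "opacities": rng.uniform(size=(num_gaussians, 1)),
        "colors":    rng.uniform(size=(num_gaussians, 3)),  # RGB
    }

def animate(gaussians: dict, R: np.ndarray, t: np.ndarray) -> dict:
    """Rigidly repose the predicted Gaussians; because they live in a
    canonical space, animation needs no post-processing step."""
    posed = dict(gaussians)
    posed["positions"] = gaussians["positions"] @ R.T + t
    return posed

image = np.zeros((1024, 1024, 3), dtype=np.float32)  # 1K-resolution input
g = predict_gaussians(image)
posed = animate(g, np.eye(3), np.array([0.0, 1.0, 0.0]))
```

In practice the rigid transform would be replaced by per-Gaussian skinning driven by the estimated body pose; the point of the sketch is only the data flow: image in, animatable Gaussian set out.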