We present a feed-forward framework for Gaussian full-head synthesis from a single unposed image. Unlike previous work that relies on time-consuming GAN inversion and test-time optimization, our framework reconstructs the Gaussian full-head model from a single unposed image in a single forward pass, enabling fast reconstruction and rendering during inference. To mitigate the lack of large-scale 3D head assets, we construct a large-scale synthetic dataset from trained 3D GANs and train our framework on synthetic data only. For efficient high-fidelity generation, we introduce a coarse-to-fine Gaussian head generation pipeline, in which sparse points from the FLAME model interact with image features via transformer blocks for feature extraction and coarse shape reconstruction, and are then densified for high-fidelity reconstruction. To fully exploit the prior knowledge residing in pretrained 3D GANs, we propose a dual-branch framework that aggregates structured spherical triplane features and unstructured point-based features for more effective Gaussian head reconstruction. Experimental results demonstrate the effectiveness of our framework compared with existing work. Project page: https://panolam.github.io/.
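The abstract describes the pipeline only at a high level; the following is a minimal, self-contained sketch (not the authors' code) of how the coarse-to-fine, dual-branch idea could be organized: sparse FLAME-point tokens attend to image features via transformer blocks, the points are densified, and the point-based branch is fused with a stand-in for the spherical triplane branch to predict per-Gaussian parameters. All module names, feature dimensions, and the per-Gaussian parameter layout are illustrative assumptions.

```python
# Hypothetical sketch of the coarse-to-fine, dual-branch Gaussian head pipeline.
# Names, dimensions, and the 14-parameter Gaussian layout are assumptions.
import torch
import torch.nn as nn


class PointImageCrossAttention(nn.Module):
    """One transformer block: point tokens query image tokens."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, points: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        points = points + self.attn(self.norm1(points), image, image)[0]
        return points + self.ff(self.norm2(points))


class CoarseToFineGaussianHead(nn.Module):
    """Coarse shape from sparse FLAME points, then densify and predict Gaussians."""
    def __init__(self, dim: int = 256, densify_factor: int = 4, n_blocks: int = 2):
        super().__init__()
        self.blocks = nn.ModuleList(PointImageCrossAttention(dim) for _ in range(n_blocks))
        self.coarse_xyz = nn.Linear(dim, 3)                  # coarse point positions
        self.densify = nn.Linear(dim, densify_factor * dim)  # split each point into K children
        self.fuse = nn.Linear(2 * dim, dim)                  # dual-branch fusion (point + triplane)
        # Assumed layout: 3 xyz + 3 scale + 4 rotation + 3 color + 1 opacity = 14
        self.gaussian_head = nn.Linear(dim, 14)
        self.densify_factor, self.dim = densify_factor, dim

    def forward(self, point_feat, image_feat, triplane_feat):
        for blk in self.blocks:                              # coarse stage
            point_feat = blk(point_feat, image_feat)
        coarse_xyz = self.coarse_xyz(point_feat)             # (B, N, 3) coarse shape
        B, N, _ = point_feat.shape
        fine = self.densify(point_feat).reshape(B, N * self.densify_factor, self.dim)
        fused = self.fuse(torch.cat([fine, triplane_feat], dim=-1))
        return coarse_xyz, self.gaussian_head(fused)         # per-Gaussian parameters


# Toy shapes: 512 image tokens, 5023 FLAME vertices (standard FLAME mesh), densified x4.
B, dim = 1, 256
point_feat = torch.randn(B, 5023, dim)
image_feat = torch.randn(B, 512, dim)
triplane_feat = torch.randn(B, 5023 * 4, dim)  # stand-in for sampled spherical-triplane features
coarse_xyz, gaussians = CoarseToFineGaussianHead(dim)(point_feat, image_feat, triplane_feat)
print(coarse_xyz.shape, gaussians.shape)       # (1, 5023, 3), (1, 20092, 14)
```

The sketch only illustrates the data flow implied by the abstract (cross-attention between FLAME points and image features, densification, and fusion of the two feature branches); how the spherical triplane features are produced and sampled from the pretrained 3D GAN is not specified here.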