In the animation industry, 3D modelers typically rely on non-overlapped front and back concept designs to guide the 3D modeling of anime characters. However, there is currently a lack of automated approaches for generating anime characters directly from these 2D designs. In light of this, we explore the novel task of reconstructing anime characters from non-overlapped views. This task presents two main challenges: existing multi-view approaches cannot be directly applied due to the absence of overlapping regions, and full-body anime character data and standard benchmarks are scarce. To bridge this gap, we present \textbf{N}on-\textbf{O}verlapped \textbf{V}iews for 3D \textbf{A}nime Character Reconstruction (NOVA-3D), a new framework that fuses features in a view-aware manner to effectively learn 3D-consistent representations and directly synthesizes full-body anime characters from non-overlapped front and back views. To facilitate this line of research, we collected the NOVA-Human dataset, which comprises multi-view images of 3D anime characters with accurate camera parameters. Extensive experiments demonstrate that the proposed method outperforms baseline approaches, achieving superior reconstruction of anime characters with exceptional detail fidelity. In addition, to further verify the effectiveness of our method, we applied it to the anime head reconstruction task and improved the state-of-the-art baseline to an average of 94.453 SSIM, 7.726 LPIPS, and 19.575 PSNR. Code and datasets are available at https://wanghongsheng01.github.io/NOVA-3D/.
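As a point of reference for the reported numbers, PSNR is computed from the mean squared error between a rendered view and its ground-truth image. The snippet below is a minimal illustrative sketch of that standard metric in NumPy, not the paper's actual evaluation code; the toy images and the `psnr` helper are assumptions for demonstration only.

```python
import numpy as np

def psnr(pred, target, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images (higher is better)."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: two 8-bit images differing by a constant offset of 10,
# so MSE = 100 and PSNR = 10 * log10(255^2 / 100) ≈ 28.13 dB.
a = np.zeros((64, 64), dtype=np.uint8)
b = np.full((64, 64), 10, dtype=np.uint8)
print(round(psnr(a, b), 2))  # → 28.13
```

SSIM and LPIPS follow the same pattern of per-image-pair scoring averaged over a test set, but involve local-statistics windows and a learned perceptual network, respectively, so off-the-shelf implementations are typically used in practice.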