Recent 3D generative models produce high-quality textures for 3D mesh objects. However, they commonly rely on the strong assumption that input 3D meshes come with a manually authored mesh parameterization (UV mapping), a task that requires both technical precision and artistic judgment. Industry surveys indicate that this process often accounts for a significant share of asset-creation time, creating a major bottleneck for 3D content creators. Moreover, existing automatic methods often ignore two perceptually important criteria: (1) semantic awareness (UV charts should align semantically similar 3D parts across shapes) and (2) visibility awareness (cutting seams should lie in regions unlikely to be seen). To overcome these shortcomings and automate the mesh parameterization process, we present an unsupervised differentiable framework that augments standard geometry-preserving UV learning with semantic- and visibility-aware objectives. For semantic awareness, our pipeline (i) segments the mesh into semantic 3D parts, (ii) applies an unsupervised learned per-part UV-parameterization backbone, and (iii) aggregates the per-part charts into a unified UV atlas. For visibility awareness, we use ambient occlusion (AO) as an exposure proxy and back-propagate a soft, differentiable AO-weighted seam objective to steer cutting seams toward occluded regions. Through qualitative and quantitative evaluations against state-of-the-art methods, we show that the proposed method produces UV atlases that better support texture generation and reduce perceptible seam artifacts compared to recent baselines. Our implementation code is publicly available at: https://github.com/AHHHZ975/Semantic-Visibility-UV-Param.