We present a lightweight two-stage framework for joint geometry and color inpainting of damaged 3D objects, motivated by the digital restoration of cultural heritage artifacts. The pipeline separates damage localization from reconstruction. In the first stage, a 2D convolutional network predicts damage masks on RGB slices extracted from a voxelized object, and these predictions are aggregated into a volumetric mask. In the second stage, a diffusion-based 3D U-Net performs mask-conditioned inpainting directly on voxel grids, reconstructing geometry and color while preserving observed regions. The model jointly predicts occupancy and color using a composite objective that combines occupancy reconstruction with masked color reconstruction and perceptual regularization. We evaluate the approach on a curated set of textured artifacts with synthetically generated damage using standard geometric and color metrics. Compared to symmetry-based baselines, our method produces more complete geometry and more coherent color reconstructions at a fixed 32^3 resolution. Overall, the results indicate that explicit mask conditioning is a practical way to guide volumetric diffusion models for joint 3D geometry and color inpainting.
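The composite objective described above can be sketched as follows. This is a minimal illustrative NumPy version, not the paper's implementation: the function name, tensor shapes (channels-first voxel grids), loss weights, and the use of a total-variation smoothness term as a lightweight stand-in for the perceptual regularizer are all assumptions.

```python
import numpy as np

def composite_inpainting_loss(pred_occ_logits, pred_rgb, gt_occ, gt_rgb, mask,
                              w_occ=1.0, w_col=1.0, w_perc=0.1):
    """Illustrative composite loss: occupancy BCE over the full grid,
    L1 color loss restricted to the damage mask, and a smoothness
    regularizer. Shapes assumed: occupancy/mask (1, D, H, W), color (3, D, H, W).
    Weights w_occ, w_col, w_perc are hypothetical, not from the paper."""
    # Occupancy reconstruction: binary cross-entropy on logits, all voxels.
    p = 1.0 / (1.0 + np.exp(-pred_occ_logits))
    eps = 1e-7
    l_occ = -np.mean(gt_occ * np.log(p + eps) + (1.0 - gt_occ) * np.log(1.0 - p + eps))

    # Color reconstruction: L1 only on masked (damaged) voxels, so
    # observed regions are preserved rather than re-predicted.
    m = np.broadcast_to(mask, pred_rgb.shape)
    l_col = np.sum(m * np.abs(pred_rgb - gt_rgb)) / max(float(np.sum(m)), 1.0)

    # Stand-in for the perceptual term: total-variation smoothness on color
    # (a real implementation would likely use rendered-view feature losses).
    tv = sum(np.mean(np.abs(np.diff(pred_rgb, axis=a))) for a in (1, 2, 3))

    return w_occ * l_occ + w_col * l_col + w_perc * tv
```

In training, the masked color term keeps gradients from flowing through observed voxels' colors, which is one simple way to realize the "preserving observed regions" behavior the abstract describes.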