The boundary representation (B-Rep) models a 3D solid by its explicit boundaries: trimmed corners, edges, and faces. Recovering a B-Rep from unstructured data is a challenging and valuable task in computer vision and graphics. Recent advances in deep learning have greatly improved the recovery of 3D shape geometry, but existing methods still depend on dense, clean point clouds and struggle to generalize to novel shapes. We propose B-Rep Gaussian Splatting (BrepGaussian), a novel framework that learns 3D parametric representations from 2D images. We employ a Gaussian Splatting renderer with learnable features, followed by a dedicated fitting strategy. To disentangle geometry reconstruction from feature learning, we introduce a two-stage learning framework that first captures geometry and edges and then refines patch features, yielding clean geometry and coherent instance representations. Extensive experiments demonstrate the superior performance of our approach over state-of-the-art methods.