The boundary representation (B-rep) models a 3D solid through its explicit boundaries: trimmed corners, edges, and faces. Recovering a B-rep from unstructured data is a challenging and valuable task in computer vision and graphics. Recent advances in deep learning have greatly improved the recovery of 3D shape geometry, but existing methods still depend on dense, clean point clouds and struggle to generalize to novel shapes. We propose B-rep Gaussian Splatting (BrepGaussian), a novel framework that learns 3D parametric representations from 2D images. We employ a Gaussian Splatting renderer with learnable features, followed by a dedicated fitting strategy. To disentangle geometry reconstruction from feature learning, we introduce a two-stage learning framework that first captures geometry and edges and then refines patch features, yielding clean geometry and coherent instance representations. Extensive experiments demonstrate the superior performance of our approach over state-of-the-art methods. We will release our code and datasets upon acceptance.