In this paper, we present KP-RED, a unified KeyPoint-driven REtrieval and Deformation framework that takes object scans as input and jointly retrieves and deforms the most geometrically similar CAD models from a pre-processed database to tightly match the target. Unlike existing dense-matching-based methods, which typically struggle with noisy partial scans, we propose to leverage category-consistent sparse keypoints to naturally handle both full and partial object scans. Specifically, we first employ a lightweight retrieval module to establish a keypoint-based embedding space, measuring similarity among objects by dynamically aggregating deformation-aware local-global features around the extracted keypoints. Objects that are close in the embedding space are considered geometrically similar. We then introduce a neural cage-based deformation module that estimates the influence vector of each keypoint upon the cage vertices inside its local support region to control the deformation of the retrieved shape. Extensive experiments on the synthetic dataset PartNet and the real-world dataset Scan2CAD demonstrate that KP-RED surpasses existing state-of-the-art approaches by a large margin. Code and trained models are released at https://github.com/lolrudy/KP-RED.
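The keypoint-driven cage deformation described above can be illustrated with a toy sketch. Here each keypoint displaces nearby cage vertices with a weight that decays inside a local support region; the function name, the linear falloff, and the `radius` parameter are illustrative assumptions, not the paper's learned influence vectors, which a network would predict instead.

```python
import numpy as np

def deform_cage_with_keypoints(cage_verts, keypoints, kp_offsets, radius=0.3):
    """Toy sketch of keypoint-controlled cage deformation.

    cage_verts: (C, 3) cage vertex positions
    keypoints:  (K, 3) keypoint positions on the source shape
    kp_offsets: (K, 3) desired keypoint displacements
    radius:     local support radius (hypothetical hand-set parameter;
                KP-RED instead learns per-keypoint influence vectors)
    """
    new_cage = cage_verts.copy()
    for kp, off in zip(keypoints, kp_offsets):
        # Distance from this keypoint to every cage vertex.
        d = np.linalg.norm(cage_verts - kp, axis=1)
        # Linear falloff inside the support region, zero outside:
        # a simple stand-in for a learned influence weight.
        w = np.clip(1.0 - d / radius, 0.0, None)
        new_cage += w[:, None] * off
    return new_cage
```

In the full pipeline, the deformed cage would then drive the dense mesh through cage coordinates (e.g., mean value coordinates), so moving a few sparse keypoints smoothly deforms the whole retrieved CAD model.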