Recently, graph-based and Transformer-based deep learning networks have demonstrated excellent performance on various point cloud tasks. Most existing graph methods are based on static graphs, which take a fixed input to establish graph relations. Moreover, many graph methods aggregate neighboring features by max or average pooling, so that either only a single neighboring point affects the centroid's feature or all neighboring points influence it equally, ignoring the correlations and differences between points. Most Transformer-based methods extract point cloud features with global attention and lack feature learning on local neighborhoods. To address the problems of these two types of models, we propose a new feature extraction block named Graph Transformer and construct a 3D point cloud learning network called GTNet to learn features of point clouds in both local and global patterns. Graph Transformer integrates the advantages of graph-based and Transformer-based methods, and consists of Local Transformer and Global Transformer modules. Local Transformer uses a dynamic graph to compute the weights of all neighboring points through intra-domain cross-attention with dynamically updated graph relations, so that every neighboring point can affect the centroid's feature with a different weight; Global Transformer enlarges the receptive field of Local Transformer through global self-attention. In addition, to avoid vanishing gradients caused by increasing network depth, we apply residual connections to centroid features in GTNet; we also use the features of the centroid and its neighbors to generate local geometric descriptors in Local Transformer, strengthening the model's ability to learn local information. Finally, we apply GTNet to shape classification, part segmentation, and semantic segmentation tasks in this paper.
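The core mechanism described above, a dynamically built kNN graph whose neighbors attend to the centroid with individual softmax weights, followed by a residual connection on the centroid feature, can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification for intuition, not the paper's exact Local Transformer: the function names, projection matrices `Wq`, `Wk`, `Wv`, and the single-head, bias-free formulation are all illustrative assumptions.

```python
import numpy as np

def knn(points, k):
    # Dynamic graph relations: k nearest neighbors by pairwise squared
    # distance (self excluded), recomputed from the current point positions.
    d = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.argsort(d, axis=1)[:, 1:k + 1]          # (N, k) neighbor indices

def local_cross_attention(feats, points, k, Wq, Wk, Wv):
    # Illustrative sketch of the Local Transformer idea: each centroid
    # attends over its k neighbors, so every neighbor influences the
    # centroid with its own learned weight (unlike max/avg pooling).
    idx = knn(points, k)                               # dynamic graph
    q = feats @ Wq                                     # centroid queries (N, d)
    keys = (feats @ Wk)[idx]                           # neighbor keys   (N, k, d)
    vals = (feats @ Wv)[idx]                           # neighbor values (N, k, d)
    logits = (keys * q[:, None, :]).sum(-1) / np.sqrt(q.shape[-1])
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                  # per-neighbor weights
    out = (w[:, :, None] * vals).sum(axis=1)           # weighted aggregation
    return feats + out                                 # residual on centroid

rng = np.random.default_rng(0)
N, d, k = 32, 8, 4
pts = rng.standard_normal((N, 3))
f = rng.standard_normal((N, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = local_cross_attention(f, pts, k, Wq, Wk, Wv)
print(out.shape)  # (32, 8): one updated feature per centroid
```

Because `knn` is recomputed from the current input rather than fixed once, the graph relations update dynamically, which is the contrast with static-graph methods drawn in the abstract; the final residual addition corresponds to the residual connection on centroid features used to counter vanishing gradients in deeper networks.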