In point cloud geometry compression, context models usually use the one-hot encoding of node occupancy as the label, and the cross-entropy between this one-hot encoding and the probability distribution predicted by the context model as the loss function. However, this approach has two main weaknesses. First, the differences between the contexts of different nodes are not significant, making it difficult for the context model to accurately predict the probability distribution of node occupancy. Second, since the one-hot encoding is not the actual probability distribution of node occupancy, the cross-entropy loss function is inaccurate. To address these problems, we propose a general structure that can enhance existing context models. We introduce context feature residuals into the context model to amplify the differences between contexts. We also add a multi-layer perceptron branch that uses the mean squared error between its output and node occupancy as a loss function, providing accurate gradients during backpropagation. We validate our method by showing that it improves the performance of an octree-based model (OctAttention) and a voxel-based model (VoxelDNN) on the object point cloud datasets MPEG 8i and MVUB, as well as the LiDAR point cloud dataset SemanticKITTI.
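The two ideas above can be sketched as a combined loss: a cross-entropy term on the classifier head fed with context feature residuals, plus an MSE term on an auxiliary branch. The following is a minimal NumPy sketch under stated assumptions — `enhanced_context_loss`, the linear weight matrices, and the residual definition (context feature minus a reference feature) are illustrative stand-ins, not the paper's actual architecture, and the auxiliary branch is reduced to a single linear layer for brevity.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def enhanced_context_loss(ctx_feat, ref_feat, occupancy_onehot,
                          W_cls, W_aux, alpha=1.0):
    """Illustrative combined loss for an enhanced context model.

    ctx_feat, ref_feat : (N, D) context features and reference features
                         (e.g. a neighboring node's features); their
                         difference is the "context feature residual".
    occupancy_onehot   : (N, K) one-hot occupancy labels.
    W_cls, W_aux       : (D, K) linear stand-ins for the classifier head
                         and the auxiliary MLP branch (hypothetical names).
    alpha              : weight of the auxiliary MSE term.
    """
    # Residual amplifies differences between otherwise-similar contexts.
    residual = ctx_feat - ref_feat

    # Cross-entropy between one-hot labels and predicted distribution.
    probs = softmax(residual @ W_cls)
    ce = -np.mean(np.sum(occupancy_onehot * np.log(probs + 1e-12), axis=-1))

    # Auxiliary branch regressed to occupancy with mean squared error.
    aux_out = residual @ W_aux
    mse = np.mean((aux_out - occupancy_onehot) ** 2)

    return ce + alpha * mse
```

In a real model the two heads would share a deeper backbone and the auxiliary branch would be a multi-layer perceptron; the point of the sketch is only that both loss terms are computed from the same residual features and summed before backpropagation.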