By utilizing uniformly distributed sparse annotations, weakly supervised learning alleviates the heavy reliance on fine-grained annotations in point cloud semantic segmentation tasks. However, few works discuss the inhomogeneity of sparse annotations, although it is common in real-world scenarios. Therefore, this work introduces the probability density function into the gradient sampling approximation method to qualitatively analyze the impact of annotation sparsity and inhomogeneity under weakly supervised learning. Based on our analysis, we propose an Adaptive Annotation Distribution Network (AADNet) capable of robust learning on arbitrarily distributed sparse annotations. Specifically, we propose a label-aware point cloud downsampling strategy that increases the proportion of annotated points involved in the training stage. Furthermore, we design a multiplicative dynamic entropy as the gradient calibration function to mitigate the gradient bias caused by non-uniformly distributed sparse annotations and to explicitly reduce epistemic uncertainty. Without any prior restrictions or additional information, our proposed method achieves comprehensive performance improvements across multiple label rates and different annotation distributions.