Occlusions hinder point cloud frame alignment in LiDAR data, a challenge inadequately addressed by scene flow models that are evaluated mainly on occlusion-free datasets. Prior attempts to integrate occlusion handling into the network often suffer accuracy loss due to two main limitations: a) inadequate use of occlusion information, which is often merged with the flow estimate without an effective integration strategy, and b) reliance on distance-weighted upsampling, which falls short in correcting occlusion-related errors. To address these challenges, we introduce the Correlation Matrix Upsampling Flownet (CMU-Flownet), which incorporates an occlusion estimation module within its cost volume layer through an Occlusion-aware Cost Volume (OCV) mechanism. Specifically, we propose an enhanced upsampling scheme that widens the receptive field of the sampling process by integrating a Correlation Matrix that measures point-level similarity. Meanwhile, our model robustly integrates occlusion cues into the scene flow pipeline, deploying this information strategically during the refinement stage of flow estimation. Empirical evaluation shows that CMU-Flownet establishes state-of-the-art performance on the occluded FlyingThings3D and KITTI datasets, surpassing previous methods on the majority of evaluated metrics.
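The contrast drawn above, between distance-weighted upsampling and correlation-weighted upsampling, can be illustrated with a minimal NumPy sketch: instead of weighting a dense point's k nearest coarse neighbors purely by inverse distance, the weights come from a softmax over feature correlations. The function name, feature shapes, and k are illustrative assumptions, not the paper's exact OCV formulation.

```python
import numpy as np

def correlation_upsample(dense_xyz, coarse_xyz, coarse_flow,
                         dense_feat, coarse_feat, k=3):
    """Upsample coarse scene flow (M, 3) to dense points (N, 3).

    Each dense point's flow is a weighted sum over its k nearest
    coarse neighbors, with weights given by a softmax over feature
    correlations (dot products) rather than inverse distance alone.
    Hypothetical sketch of correlation-weighted upsampling.
    """
    # pairwise squared distances between dense and coarse points: (N, M)
    d2 = ((dense_xyz[:, None, :] - coarse_xyz[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]           # k nearest coarse indices
    # feature correlation between each dense point and its neighbors: (N, k)
    corr = np.einsum('nc,nkc->nk', dense_feat, coarse_feat[idx])
    # numerically stable softmax over the k neighbors
    w = np.exp(corr - corr.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    # weighted sum of neighbor flows: (N, 3)
    return np.einsum('nk,nkc->nc', w, coarse_flow[idx])
```

Because the weights are a proper softmax, a neighbor whose features disagree with the query (e.g. an occluded point) is down-weighted even if it is spatially close, which is the failure mode of pure distance weighting that the abstract points to.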