In the realm of autonomous driving, accurate 3D perception serves as the foundation. However, developing such models relies on extensive human annotations -- a process that is both costly and labor-intensive. To address this challenge from a data representation learning perspective, we introduce SuperFlow, a novel framework designed to harness consecutive LiDAR-camera pairs for establishing spatiotemporal pretraining objectives. SuperFlow stands out by integrating two key designs: 1) a dense-to-sparse consistency regularization, which promotes insensitivity to point cloud density variations during feature learning, and 2) a flow-based contrastive learning module, carefully crafted to extract meaningful temporal cues from readily available sensor calibrations. To further boost learning efficiency, we incorporate a plug-and-play view consistency module that enhances the alignment of the knowledge distilled from camera views. Extensive comparative and ablation studies across 11 heterogeneous LiDAR datasets validate the effectiveness and superiority of SuperFlow. Additionally, we observe several interesting emergent properties when scaling up the 2D and 3D backbones during pretraining, shedding light on future research into 3D foundation models for LiDAR-based perception.
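To make the first design concrete, the following is a minimal, illustrative sketch (not the paper's implementation) of a dense-to-sparse consistency term of the kind described above: features computed on a randomly sparsified copy of a LiDAR scan are pulled toward the features of the full-density scan. The encoder interface, the random sub-sampling scheme, and the cosine-similarity form of the loss are assumptions made for illustration only.

```python
# Illustrative sketch of a dense-to-sparse consistency regularizer.
# Assumes `encoder` is any point-cloud encoder returning per-point features;
# all names and the specific loss form are hypothetical, not the paper's code.
import torch
import torch.nn.functional as F

def dense_to_sparse_consistency(encoder, points, keep_ratio=0.5):
    """Encourage features of a sparsified scan to match the dense scan.

    points:     (N, 3) LiDAR point coordinates.
    keep_ratio: fraction of points retained in the sparse view.
    """
    # Features from the full-density point cloud, treated as the target.
    with torch.no_grad():
        dense_feats = encoder(points)                # (N, C)

    # Random sub-sampling simulates a lower-density scan of the same scene.
    n_keep = int(points.shape[0] * keep_ratio)
    idx = torch.randperm(points.shape[0])[:n_keep]
    sparse_feats = encoder(points[idx])              # (n_keep, C)

    # Pull each sparse-view feature toward its dense-view counterpart,
    # making the representation insensitive to point density variations.
    target = dense_feats[idx]
    loss = 1.0 - F.cosine_similarity(sparse_feats, target, dim=-1).mean()
    return loss
```

In this sketch the dense branch is detached so that only the sparse view is pushed toward the density-invariant target; other choices (symmetric losses, contrastive objectives over superpoints) are equally plausible instantiations of the same idea.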