Generative models have significantly improved the quality of generation and prediction for autonomous driving, but only on camera images or LiDAR point clouds in isolation. A real-world autonomous driving system, however, uses multiple input modalities, typically cameras and LiDAR, which carry complementary information for generation; existing generation methods ignore this crucial feature, so their outputs cover only separate 2D or 3D information. To fill this gap in 2D-3D multi-modal joint generation for autonomous driving, in this paper we propose \emph{HoloDrive}, a framework that jointly generates camera images and LiDAR point clouds. We employ BEV-to-Camera and Camera-to-BEV transform modules between heterogeneous generative models, and introduce a depth prediction branch in the 2D generative model to disambiguate the un-projection from image space to BEV space. We then extend the method to future prediction by adding temporal structure and a carefully designed progressive training scheme. Finally, we conduct experiments on single-frame generation and world model benchmarks, and demonstrate that our method yields significant gains over SOTA methods in terms of generation metrics.