We present WidthFormer, a novel transformer-based module to compute Bird's-Eye-View (BEV) representations from multi-view cameras for real-time autonomous-driving applications. WidthFormer is computationally efficient, robust, and does not require any special engineering effort to deploy. We first introduce a novel 3D positional encoding mechanism capable of accurately encapsulating 3D geometric information, which enables our model to compute high-quality BEV representations with only a single transformer decoder layer. This mechanism is also beneficial for existing sparse 3D object detectors. Inspired by recently proposed works, we further improve our model's efficiency by vertically compressing the image features when they serve as attention keys and values, and we develop two modules to compensate for the potential information loss caused by this feature compression. Experimental evaluation on the widely used nuScenes 3D object detection benchmark demonstrates that our method outperforms previous approaches across different 3D detection architectures. More importantly, our model is highly efficient. For example, with $256\times 704$ input images, it achieves 1.5 ms and 2.8 ms latency on an NVIDIA 3090 GPU and the Horizon Journey-5 computing solution, respectively. Furthermore, WidthFormer exhibits strong robustness to different degrees of camera perturbation. Our study offers valuable insights into the deployment of BEV transformation methods in real-world, complex road environments. Code is available at https://github.com/ChenhongyiYang/WidthFormer .
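To make the efficiency claim concrete, the sketch below illustrates the token-count reduction from vertically compressing image features before they serve as attention keys and values. The mean-pooling operator and all shapes here are illustrative assumptions, not the paper's actual compression module:

```python
import numpy as np

def width_compress(feats):
    """Vertically compress multi-view image features (hypothetical sketch).

    Collapses the height axis so that attention keys/values become one
    token per image column, shrinking the key/value set from
    n_views * H * W tokens to n_views * W tokens.

    feats: (n_views, channels, height, width) array.
    returns: (n_views * width, channels) key/value tokens.
    """
    n, c, h, w = feats.shape
    # Simple mean-pooling over the vertical (height) axis; the paper instead
    # uses learned compression plus two compensation modules for the lost
    # information -- this only demonstrates the cost reduction.
    cols = feats.mean(axis=2)                 # (n, c, w)
    return cols.transpose(0, 2, 1).reshape(n * w, c)

# e.g. 6 cameras, 64-channel feature maps of size 32x88
feats = np.random.rand(6, 64, 32, 88)
tokens = width_compress(feats)
print(tokens.shape)  # (528, 64): 6*88 tokens instead of 6*32*88 = 16896
```

Because cross-attention cost scales linearly with the number of keys and values, this compression alone cuts the attention workload by a factor of H (32 in this example), which is what makes a single decoder layer viable for real-time use.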