Collaborative perception significantly enhances the perception performance of individual vehicles through the exchange of sensory information among agents. However, real-world deployment faces challenges due to bandwidth constraints and inevitable calibration errors during information exchange. To address these issues, we propose mmCooper, a novel multi-agent, multi-stage, communication-efficient, and collaboration-robust cooperative perception framework. Our framework leverages a multi-stage collaboration strategy that dynamically and adaptively balances the intermediate- and late-stage information shared among agents, enhancing perceptual performance while maintaining communication efficiency. To support robust collaboration despite potential misalignments and calibration errors, our framework captures multi-scale contextual information for robust fusion in the intermediate stage and calibrates the received detection results in the late stage to improve accuracy. We validate the effectiveness of mmCooper through extensive experiments on both real-world and simulated datasets. The results demonstrate the superiority of our proposed framework and the effectiveness of each of its components.