Millimeter-wave radar enables robust environment perception for autonomous systems under adverse conditions, yet it suffers from sparse, noisy point clouds with low angular resolution. Existing diffusion-based radar enhancement methods either incur high learning complexity by modeling the full LiDAR distribution or fail to prioritize critical structures because they process all regions uniformly. To address these issues, we propose R3D, a regional-guided residual radar diffusion framework that integrates two components: residual diffusion modeling, which targets the concentrated LiDAR-radar residual encoding complementary high-frequency details and thereby reduces learning difficulty; and sigma-adaptive regional guidance, which leverages radar-specific signal properties to generate attention maps and applies lightweight guidance only in low-noise stages, refining key regions while avoiding gradient imbalance. Extensive experiments on the ColoRadar dataset demonstrate that R3D outperforms state-of-the-art methods, offering a practical solution for radar perception enhancement. Our anonymized code and pretrained models are released here: https://anonymous.4open.science/r/r3d-F836
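The two ideas named in the abstract can be illustrated with a toy sketch. This is a hypothetical illustration, not the authors' implementation: all function names, the threshold `sigma_thresh`, and the toy attention map are assumptions made here for clarity. It shows (1) residual modeling, where the learning target is the LiDAR-minus-radar residual, which is more concentrated than the full LiDAR signal, and (2) sigma-adaptive gating, where a regional guidance term is active only at low noise levels.

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusion_training_target(lidar, radar):
    """Residual modeling: the target is r = lidar - radar, which has
    much lower variance than the full LiDAR signal, reducing learning
    difficulty (illustrative sketch, not the paper's exact objective)."""
    return lidar - radar

def sigma_adaptive_weight(sigma, sigma_thresh=0.5):
    """Gate for regional guidance: on (1.0) only in low-noise stages,
    off (0.0) otherwise, so guidance gradients do not dominate the
    early high-noise denoising steps. `sigma_thresh` is a made-up value."""
    return 1.0 if sigma < sigma_thresh else 0.0

def guided_loss(pred, target, attention, sigma):
    """Base MSE plus a lightweight attention-weighted regional term,
    gated by the current noise level sigma. `attention` stands in for
    a map derived from radar-specific signal properties."""
    base = np.mean((pred - target) ** 2)
    regional = np.mean(attention * (pred - target) ** 2)
    return base + sigma_adaptive_weight(sigma) * regional

# Toy 1-D stand-ins for point-cloud intensities.
lidar = rng.normal(size=8)
radar = lidar + 0.1 * rng.normal(size=8)        # radar ~ lidar + small error
residual = diffusion_training_target(lidar, radar)
attention = np.abs(radar) / np.abs(radar).max()  # toy attention map

pred = residual + 0.05                           # stand-in network output
print(guided_loss(pred, residual, attention, sigma=0.2))  # guidance active
print(guided_loss(pred, residual, attention, sigma=0.9))  # guidance gated off
```

Note that the residual has far lower variance than the raw LiDAR signal, which is the intuition behind why learning the residual is easier than modeling the full distribution.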