Inverse Synthetic Aperture Radar (ISAR) imaging of small everyday objects is a formidable challenge due to their limited Radar Cross-Section (RCS) and the inherent resolution constraints of radar systems. Existing ISAR reconstruction methods, including backprojection (BP), often require complex setups and controlled environments, rendering them impractical for many real-world, noisy scenarios. In this paper, we propose a novel Analysis-through-Synthesis (ATS) framework, enabled by Neural Radiance Fields (NeRF), for high-resolution coherent ISAR imaging of small objects from sparse, noisy Ultra-Wideband (UWB) radar data collected with an inexpensive, portable setup. Our end-to-end framework integrates UWB radar wave propagation, reflection characteristics, and scene priors, enabling efficient 2D scene reconstruction without costly anechoic chambers or complex measurement test beds. Through qualitative and quantitative comparisons, we demonstrate that the proposed method outperforms traditional techniques, generating ISAR images of scenes containing multiple targets and complex structures under Non-Line-of-Sight (NLOS) and noisy conditions, particularly with a limited number of views and sparse UWB radar scans. This work represents a significant step toward practical, cost-effective ISAR imaging of small everyday objects, with broad implications for robotics and mobile sensing applications.
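The analysis-through-synthesis idea above — synthesize radar echoes from a candidate scene with a differentiable forward model, compare them to the measured returns, and update the scene to reduce the mismatch — can be illustrated with a minimal toy sketch. This is not the paper's NeRF-based pipeline: the linear delay-and-sum forward operator, the grid sizes, and all variable names below are illustrative assumptions, and the scene is a plain 2D reflectivity grid optimized by gradient descent rather than a neural field.

```python
import numpy as np

# Toy ATS sketch: recover a 2D reflectivity grid from simulated,
# UWB-like echoes. Assumed setup, not the paper's actual method.
rng = np.random.default_rng(0)
N = 16                      # scene is an N x N pixel grid
n_views, n_bins = 12, 48    # sparse sensor views, range bins per echo

# Ground-truth scene: two point scatterers (stand-in for small objects).
scene_true = np.zeros((N, N))
scene_true[4, 5] = 1.0
scene_true[10, 11] = 0.7

# Linear forward operator A: each (view, range-bin) measurement is a
# Gaussian-weighted sum of pixel reflectivities at matching range —
# a crude delay-and-sum model of wave propagation and reflection.
xs, ys = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
rows = []
for v in range(n_views):
    ang = np.pi * v / n_views
    sx, sy = N * np.cos(ang), N * np.sin(ang)      # sensor position
    dist = np.hypot(xs - sx, ys - sy)              # pixel-to-sensor range
    for b in range(n_bins):
        r = b * (2.0 * N / n_bins)                 # range-bin center
        rows.append(np.exp(-0.5 * ((dist - r) / 0.7) ** 2).ravel())
A = np.array(rows)

echoes = A @ scene_true.ravel()
echoes += 0.01 * rng.standard_normal(echoes.shape)  # measurement noise

# ATS loop: synthesize echoes from the current estimate, compare to the
# measurements, descend the squared-error gradient (analytic here
# because the forward model is linear).
est = np.zeros(N * N)
lr = 1.0 / np.linalg.norm(A, ord=2) ** 2            # stable step size
for _ in range(300):
    residual = A @ est - echoes
    est -= lr * (A.T @ residual)

recon = est.reshape(N, N)
```

In the paper's setting the forward model is the nonlinear rendering of a neural field, so the gradient is obtained by automatic differentiation instead of the closed-form `A.T @ residual` above, but the synthesize-compare-update loop has the same shape.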