Recent advances have showcased the potential of handheld millimeter-wave (mmWave) imaging, which applies synthetic aperture radar (SAR) principles in portable settings. However, existing studies addressing handheld motion errors either rely on costly tracking devices or employ simplified imaging models, leading to impractical deployment or limited performance. In this paper, we present IFNet, a novel deep unfolding network that combines the strengths of signal processing models and deep neural networks to achieve robust imaging and focusing for handheld mmWave systems. We first formulate the handheld imaging model by integrating multiple priors about mmWave images and handheld phase errors. We then transform the optimization process into an iterative network structure for improved and efficient imaging performance. Extensive experiments demonstrate that IFNet effectively compensates for handheld phase errors and recovers high-fidelity images from severely distorted signals. Compared with existing methods, IFNet achieves an improvement of at least 11.89 dB in average peak signal-to-noise ratio (PSNR) and 64.91% in average structural similarity index measure (SSIM) on a real-world dataset.