Wireless imaging is emerging as a key capability in next-generation integrated sensing and communication (ISAC) systems, supporting diverse context-aware applications. However, conventional imaging approaches, whether based on physical models or data-driven learning, face challenges such as accurate multipath separation and representative dataset acquisition. To address these issues, this study explores the use of implicit neural representation (INR), a paradigm that has achieved notable advances in computer vision, for wireless imaging in reconfigurable intelligent surface-aided ISAC systems. The INR network is designed with positional encoding and sinusoidal activation functions. Leveraging physics-informed loss functions, the INR is optimized through deep learning to represent continuous target shapes and scattering profiles, enabling resolution-agnostic imaging with strong generalization capability. Extensive simulations demonstrate that the proposed INR-based method achieves significant improvements over state-of-the-art techniques and further reveals the focal-length characteristics of the imaging system.
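To make the architecture described above concrete, the sketch below shows a minimal INR of the kind the abstract refers to: a coordinate-based MLP with positional encoding and sine activations (SIREN-style) that maps a continuous 2-D location to a scattering-profile value. This is an illustrative assumption, not the paper's actual model; the layer widths, number of frequency bands, and the `omega0` scale are placeholder choices, and the weights here are random rather than trained with a physics-informed loss.

```python
import numpy as np

NUM_FREQS = 4  # assumed number of positional-encoding frequency bands

def positional_encoding(coords, num_freqs=NUM_FREQS):
    """Lift raw (x, y) coordinates into sin/cos features at dyadic frequencies."""
    feats = [coords]
    for k in range(num_freqs):
        feats.append(np.sin((2 ** k) * np.pi * coords))
        feats.append(np.cos((2 ** k) * np.pi * coords))
    return np.concatenate(feats, axis=-1)

class SineINR:
    """Tiny MLP with sine activations mapping encoded (x, y) to a scalar
    scattering amplitude; weight init follows the SIREN uniform scheme."""

    def __init__(self, in_dim, hidden=32, omega0=30.0, seed=0):
        rng = np.random.default_rng(seed)
        self.omega0 = omega0
        bound = np.sqrt(6.0 / hidden) / omega0
        self.W1 = rng.uniform(-1.0 / in_dim, 1.0 / in_dim, (in_dim, hidden))
        self.W2 = rng.uniform(-bound, bound, (hidden, hidden))
        self.W3 = rng.uniform(-bound, bound, (hidden, 1))

    def __call__(self, coords):
        h = np.sin(self.omega0 * positional_encoding(coords) @ self.W1)
        h = np.sin(self.omega0 * h @ self.W2)
        return h @ self.W3  # continuous scattering-profile value

# Because the representation is continuous, it can be queried on any grid,
# which is what makes the imaging resolution-agnostic:
xy = np.stack(np.meshgrid(np.linspace(-1, 1, 8),
                          np.linspace(-1, 1, 8)), axis=-1).reshape(-1, 2)
in_dim = 2 + 2 * 2 * NUM_FREQS  # raw coords + sin/cos per frequency per axis
model = SineINR(in_dim=in_dim)
profile = model(xy)
print(profile.shape)  # (64, 1)
```

In an actual system the weights would be fitted by minimizing a physics-informed loss that ties the network's output to the received ISAC measurements, rather than drawn at random as here.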