Recent advances in pre-trained language models (PLMs) have demonstrated their capability to capture universal knowledge, making them promising for radar signal processing applications. Nevertheless, directly fine-tuning PLMs on radar signals is both computationally expensive and prone to overfitting, particularly in low signal-to-clutter ratio (SCR) environments. In this paper, we propose a fine-tuning framework for PLM-based marine radar target detection. First, we design a lightweight adaptation module that enables computationally efficient fine-tuning while preserving the pre-trained model's general knowledge. Second, a novel preference-aware loss is developed to selectively optimize different feature patches based on their online-evaluated learning values, guiding the model to concentrate on generalizable feature patterns during optimization. Finally, a binary classification head is retrained based on an autoencoder network to further enhance detection performance. Experiments on real-world radar data show that the proposed RadarPLM framework yields at least a 6.35% improvement in detection performance over existing networks under low SCR conditions. In particular, in small-sample training cases, RadarPLM also achieves a significant advantage over existing networks owing to the incorporation of the PLM.
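To make the preference-aware idea concrete, the following is a minimal sketch (not the paper's actual formulation) of a loss that weights each feature patch by an online-evaluated "learning value". Here the learning value is assumed, for illustration only, to be the loss decrease on that patch since the previous evaluation, converted to preference weights via a softmax; the function name and temperature parameter are hypothetical.

```python
import numpy as np

def preference_weighted_loss(patch_losses, prev_losses, temperature=1.0):
    """Illustrative sketch of a preference-aware loss.

    patch_losses: current per-patch losses, shape (num_patches,)
    prev_losses:  per-patch losses from the previous evaluation round

    The 'learning value' of a patch is approximated (an assumption of this
    sketch) as its non-negative loss decrease; patches that are still
    improving receive higher preference weight in the aggregate loss.
    """
    learning_value = np.maximum(prev_losses - patch_losses, 0.0)
    # Softmax over learning values -> preference weights summing to 1
    z = learning_value / temperature
    z = z - z.max()  # numerical stability
    weights = np.exp(z) / np.exp(z).sum()
    return float((weights * patch_losses).sum())
```

With equal learning values this reduces to a uniform average; when one patch improves faster, its loss dominates the weighted sum, which mirrors the abstract's stated goal of concentrating optimization on patches judged most valuable to learn.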