Recent advances in pre-trained language models (PLMs) have demonstrated their capability to capture universal knowledge, making them promising for radar signal processing applications. Nevertheless, directly fine-tuning PLMs on radar signals is both computationally expensive and prone to overfitting, particularly in low signal-to-clutter ratio (SCR) environments. In this paper, we propose RadarPLM, a novel fine-tuning framework for PLM-based marine radar target detection. First, we design a lightweight adaptation module that enables computationally efficient fine-tuning while preserving the pre-trained model's general knowledge. Second, a novel preference-aware loss is developed to selectively optimize different feature patches based on their online-evaluated learning values, guiding the model to concentrate on generalizable feature patterns during optimization. Finally, a binary classification head built on an autoencoder network is retrained to further enhance detection performance. Experiments on real-world radar data show that the proposed RadarPLM framework yields at least a 6.35% improvement in detection performance over existing networks under low SCR conditions. In particular, with small training sets, RadarPLM also achieves a significant advantage over existing networks, owing to the incorporation of the PLM.
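The preference-aware loss described above can be illustrated with a minimal sketch. The paper does not specify how learning values are computed, so the version below is an assumption for illustration only: each patch's learning value is approximated by how much lower its loss is than the batch average (lower loss taken as more learnable), and these values are turned into softmax weights over the per-patch losses. The function name and `temperature` parameter are hypothetical.

```python
import numpy as np

def preference_aware_loss(patch_losses, temperature=1.0):
    """Hypothetical sketch of a preference-aware loss: weight per-patch
    losses by an online-evaluated learning value so optimization focuses
    on generalizable patches. Assumed proxy: a patch's learning value is
    the negative deviation of its loss from the batch mean."""
    patch_losses = np.asarray(patch_losses, dtype=float)
    # Lower-loss patches are treated as more learnable/generalizable.
    learning_value = -(patch_losses - patch_losses.mean())
    # Softmax over learning values yields per-patch preference weights.
    weights = np.exp(learning_value / temperature)
    weights /= weights.sum()
    # Weighted aggregate emphasizes the preferred (generalizable) patches.
    return float(np.sum(weights * patch_losses))
```

With uniform per-patch losses the result reduces to the plain mean; with heterogeneous losses the weighted loss sits below the arithmetic mean, reflecting the down-weighting of hard, potentially clutter-dominated patches.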