Prompt learning (PL) has emerged as an effective strategy for adapting vision-language models (VLMs), such as CLIP, to downstream tasks under limited supervision. While PL has demonstrated strong generalization on natural image datasets, its transferability to remote sensing (RS) imagery remains underexplored. RS data present unique challenges, including multi-label scenes, high intra-class variability, and diverse spatial resolutions, which hinder the direct application of existing PL methods. In particular, current prompt-based approaches often struggle to identify dominant semantic cues and fail to generalize to novel classes in RS scenarios. To address these challenges, we propose BiMoRS, a lightweight bi-modal prompt learning framework tailored to RS tasks. BiMoRS employs a frozen image captioning model (e.g., BLIP-2) to extract textual semantic summaries from RS images. These captions are tokenized with a BERT tokenizer and fused with high-level visual features from the CLIP image encoder. A lightweight cross-attention module then conditions a learnable query prompt on the fused textual-visual representation, yielding contextualized prompts without altering the frozen CLIP backbone. We evaluate BiMoRS on four RS datasets across three domain generalization (DG) tasks and observe consistent performance gains, outperforming strong baselines by up to 2% on average. Code is available at https://github.com/ipankhi/BiMoRS.
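The core mechanism described above, a learnable query prompt conditioned on fused textual-visual features via cross-attention, can be sketched as follows. This is a minimal illustration of the general idea, not the paper's implementation: the module name, dimensions, prompt count, and concatenation-based fusion are all assumptions for demonstration.

```python
import torch
import torch.nn as nn

class CrossAttentionPrompt(nn.Module):
    """Sketch (assumed design): learnable query prompt tokens attend over
    fused caption + image features via cross-attention, producing
    contextualized prompts while the CLIP backbone stays frozen."""
    def __init__(self, dim=512, n_prompts=4, n_heads=8):
        super().__init__()
        # learnable query prompt tokens (count and init scale are hypothetical)
        self.query = nn.Parameter(torch.randn(n_prompts, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, fused):
        # fused: (B, L, dim) sequence of caption + visual tokens
        b = fused.size(0)
        q = self.query.unsqueeze(0).expand(b, -1, -1)
        # cross-attention: prompts (queries) attend to the fused context (keys/values)
        out, _ = self.attn(q, fused, fused)
        return out  # (B, n_prompts, dim) contextualized prompt tokens

# Usage sketch: stand-ins for BLIP-2 caption embeddings (BERT-tokenized) and
# high-level CLIP visual features; concatenation is one plausible fusion choice.
module = CrossAttentionPrompt()
caption_tokens = torch.randn(2, 16, 512)
visual_tokens = torch.randn(2, 8, 512)
fused = torch.cat([caption_tokens, visual_tokens], dim=1)
prompts = module(fused)
print(prompts.shape)  # torch.Size([2, 4, 512])
```

Because only the query parameters and the attention projections are trainable, such a module adds little overhead on top of the frozen encoders, consistent with the lightweight design the abstract describes.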