Vision-Language Models (VLMs) in remote sensing often fail at complex analytical tasks, a limitation stemming from an end-to-end training paradigm that bypasses crucial reasoning steps and yields unverifiable outputs. To address this limitation, we introduce the Perceptually-Grounded Geospatial Chain-of-Thought (Geo-CoT), a framework that models remote sensing analysis as a verifiable, multi-step process. We instill this analytical process through a two-stage alignment strategy built on Geo-CoT380k, the first large-scale dataset of structured Geo-CoT rationales. The strategy first employs supervised fine-tuning (SFT) to instill the foundational cognitive architecture, then applies Group Relative Policy Optimization (GRPO) to refine the model's reasoning policy toward factual correctness. The resulting model, RSThinker, outputs both a final answer and the verifiable analytical trace that justifies it. This capability yields substantial gains, significantly outperforming state-of-the-art models across a comprehensive range of tasks. We will publicly release the Geo-CoT380k dataset and the RSThinker model upon publication, offering a concrete pathway from opaque perception toward structured, verifiable reasoning for Earth Observation.