Vision-language models (VLMs) have made significant strides in reasoning, yet they often struggle with complex multimodal tasks and tend to generate overly verbose outputs. A key limitation is their reliance on chain-of-thought (CoT) reasoning, even though many tasks benefit from alternative reasoning topologies such as trees or graphs. To address this, we introduce STELAR-Vision, a training framework for topology-aware reasoning. At its core is TopoAug, a synthetic data pipeline that enriches training with diverse topological structures. We post-train Qwen2VL models with supervised fine-tuning and reinforcement learning, optimizing for both accuracy and efficiency. Additionally, we propose Frugal Learning, which reduces output length with minimal accuracy loss. On MATH-V and VLM-S2H, STELAR-Vision improves accuracy by 9.7% over its base model and surpasses the larger Qwen2VL-72B-Instruct by 7.3%. On five out-of-distribution (OOD) benchmarks, it outperforms Phi-4-Multimodal-Instruct by up to 28.4% and LLaMA-3.2-11B-Vision-Instruct by up to 13.2%, demonstrating strong generalization. Compared to Chain-Only training, our approach achieves 4.3% higher overall accuracy on in-distribution datasets and consistently outperforms it across all OOD benchmarks.