Any entity in the visual world can be hierarchically grouped by shared characteristics and mapped to fine-grained sub-categories. While Multi-modal Large Language Models (MLLMs) achieve strong performance on coarse-grained visual tasks, they often struggle with Fine-Grained Visual Recognition (FGVR). Adapting general-purpose MLLMs to FGVR typically requires large amounts of annotated data, which is costly to obtain, leaving a substantial performance gap compared to contrastive CLIP models dedicated to discriminative tasks. Moreover, MLLMs tend to overfit to seen sub-categories and generalize poorly to unseen ones. To address these challenges, we propose Fine-R1, an MLLM tailored to FGVR through an R1-style training framework: (1) Chain-of-Thought Supervised Fine-tuning, in which we construct a high-quality FGVR CoT dataset whose rationales follow a "visual analysis, candidate sub-categories, comparison, and prediction" structure, turning the model into a strong open-world classifier; and (2) Triplet Augmented Policy Optimization, in which Intra-class Augmentation mixes trajectories from anchor and positive images of the same category to improve robustness to intra-class variance, while Inter-class Augmentation maximizes the distinction between responses conditioned on images from different sub-categories to enhance discriminative ability. With only 4-shot training, Fine-R1 outperforms existing general MLLMs, reasoning MLLMs, and even contrastive CLIP models in identifying both seen and unseen sub-categories, showing promise for knowledge-intensive domains where gathering expert annotations for every sub-category is arduous. Code is available at https://github.com/PKU-ICST-MIPL/FineR1_ICLR2026.
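The abstract summarizes the two training stages only at a high level; the sketch below illustrates one way the triplet-augmented optimization could look in practice. It is a minimal Python sketch under loud assumptions: the GRPO-style group-relative advantage, the 0/1 correctness reward, the mixing ratio, and the inter-class dissimilarity bonus are hypothetical stand-ins, and every function name here is invented for illustration rather than taken from the released code.

```python
# Minimal sketch (assumptions, not the paper's exact method): intra-class
# augmentation pools rollouts from an anchor image and a same-category
# positive image into one group before computing group-relative advantages;
# an inter-class term adds a small bonus when responses conditioned on
# different sub-categories disagree.
import torch

def group_relative_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """GRPO-style advantage: each rollout's reward relative to its group."""
    return (rewards - rewards.mean()) / (rewards.std(unbiased=False) + 1e-8)

def intra_class_mix(anchor_rollouts, positive_rollouts, ratio=0.5):
    """Hypothetical intra-class augmentation: mix rollouts sampled from the
    anchor image with rollouts from a same-category positive image, so the
    group baseline spans intra-class appearance variance."""
    k = max(1, int(len(positive_rollouts) * ratio))
    return anchor_rollouts + positive_rollouts[:k]

def inter_class_bonus(pred_a: str, pred_b: str, weight: float = 0.1) -> float:
    """Hypothetical inter-class term: reward predictions that differ when
    the conditioning images come from different sub-categories."""
    return weight * float(pred_a != pred_b)

# Toy usage: each rollout is reduced to its predicted sub-category label.
anchor = ["indigo bunting", "blue grosbeak", "indigo bunting"]
positive = ["indigo bunting", "lazuli bunting"]
group = intra_class_mix(anchor, positive)
gold = "indigo bunting"
rewards = torch.tensor([1.0 if p == gold else 0.0 for p in group])
# Bonus if the first rollout's answer differs from one produced for a
# negative image of another sub-category (here a fixed toy string).
rewards[0] += inter_class_bonus(group[0], "blue grosbeak")
print(group_relative_advantages(rewards))
```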