Agglomerative models have recently emerged as a powerful approach to training vision foundation models, leveraging multi-teacher distillation from existing models such as CLIP, DINO, and SAM. This strategy enables the efficient creation of robust models, combining the strengths of individual teachers while significantly reducing computational and resource demands. In this paper, we thoroughly analyze state-of-the-art agglomerative models, identifying critical challenges including resolution mode shifts, teacher imbalance, idiosyncratic teacher artifacts, and an excessive number of output tokens. To address these issues, we propose several novel solutions: multi-resolution training, mosaic augmentation, and improved balancing of teacher loss functions. Specifically, in the context of Vision Language Models, we introduce a token compression technique to maintain high-resolution information within a fixed token count. We release our top-performing models, available in multiple scales (-B, -L, -H, and -g), alongside inference code and pretrained weights.
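The multi-teacher distillation with balanced teacher losses described above can be sketched as follows. This is a minimal illustrative example, not the paper's exact method: the class and head names are hypothetical, and the per-teacher normalization shown is just one simple balancing strategy.

```python
import torch
import torch.nn as nn

class AgglomerativeDistiller(nn.Module):
    """Illustrative sketch: distill one student from several frozen teachers."""

    def __init__(self, student, teachers, feat_dim):
        super().__init__()
        self.student = student
        self.teachers = nn.ModuleList(teachers)
        # One projection head per teacher maps student features
        # into that teacher's embedding space.
        self.heads = nn.ModuleList(
            nn.Linear(feat_dim, feat_dim) for _ in teachers
        )

    def forward(self, x):
        s = self.student(x)
        losses = []
        for teacher, head in zip(self.teachers, self.heads):
            with torch.no_grad():
                t = teacher(x)  # teachers are frozen targets
            l = nn.functional.mse_loss(head(s), t)
            # Normalize each teacher's loss by its own detached magnitude
            # so that no single teacher dominates the gradient signal
            # (one possible balancing scheme; the paper's is more refined).
            losses.append(l / (l.detach() + 1e-8))
        return torch.stack(losses).sum()
```

In a real agglomerative setup the student would be a ViT and the teachers CLIP-, DINO-, and SAM-style encoders; here linear layers stand in to keep the sketch self-contained.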