Despite the recent success of image-text contrastive models like CLIP and SigLIP, these models often struggle with vision-centric tasks that demand high-fidelity image understanding, such as counting, depth estimation, and fine-grained object recognition. Because they are trained to align images with language, these models tend to prioritize high-level semantics over fine-grained visual detail, weakening their image understanding. Conversely, vision-focused models excel at processing visual information but struggle to understand language, limiting their flexibility for language-driven tasks. In this work, we introduce TULIP, an open-source, drop-in replacement for existing CLIP-like models. Our method leverages generative data augmentation, enhanced image-image and text-text contrastive learning, and image/text reconstruction regularization to learn fine-grained visual features while preserving global semantic alignment. Scaling to over 1B parameters, our approach outperforms existing state-of-the-art (SOTA) models across multiple benchmarks: it establishes new SOTA zero-shot performance on ImageNet-1K, delivers up to a $2\times$ improvement over SigLIP on RxRx1 in linear probing for few-shot classification, and improves vision-language models, achieving over $3\times$ higher scores than SigLIP on MMVP. Our code and checkpoints are available at https://tulip-berkeley.github.io
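To make the combined objective concrete, below is a minimal PyTorch sketch of a loss in the spirit described above: an image-text alignment term plus image-image and text-text contrastive terms and a reconstruction regularizer. All function names, loss weights, and the choice of a softmax InfoNCE objective are illustrative assumptions rather than the paper's implementation (the actual method may differ; SigLIP, for instance, uses a sigmoid pairwise loss instead of softmax InfoNCE).

```python
import torch
import torch.nn.functional as F

def info_nce(za, zb, temperature=0.07):
    """Symmetric InfoNCE between two batches of paired embeddings."""
    za, zb = F.normalize(za, dim=-1), F.normalize(zb, dim=-1)
    logits = za @ zb.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(za.size(0), device=za.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

# Hypothetical combined loss; weights w_ii, w_tt, w_rec are placeholders.
def combined_loss(img, txt, img_aug, txt_para,
                  img_recon, img_pixels, txt_logits, txt_tokens,
                  w_ii=1.0, w_tt=1.0, w_rec=0.1):
    l_it = info_nce(img, txt)        # image-text alignment (CLIP-style)
    l_ii = info_nce(img, img_aug)    # image-image: original vs. generated augmentation
    l_tt = info_nce(txt, txt_para)   # text-text: caption vs. paraphrase
    # reconstruction regularization: pixel MSE + token-level cross-entropy
    l_rec = (F.mse_loss(img_recon, img_pixels)
             + F.cross_entropy(txt_logits.flatten(0, 1), txt_tokens.flatten()))
    return l_it + w_ii * l_ii + w_tt * l_tt + w_rec * l_rec
```

The intuition behind the extra terms: the intra-modal contrastive losses push the encoders to distinguish closely related views within each modality, while the reconstruction term penalizes discarding low-level visual and textual detail that pure language alignment would otherwise ignore.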