The recent breakthroughs in natural language processing for model pretraining on large quantities of data have opened the way for similar foundation models in computer vision. These models could greatly simplify the use of images in any system by producing all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning. This work shows that existing pretraining methods, especially self-supervised methods, can produce such features if trained on enough curated data from diverse sources. We revisit existing approaches and combine different techniques to scale our pretraining in terms of data and model size. Most of the technical contributions aim at accelerating and stabilizing the training at scale. In terms of data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of uncurated data, as typically done in the self-supervised literature. In terms of models, we train a ViT model (Dosovitskiy et al., 2020) with 1B parameters and distill it into a series of smaller models that surpass the best available all-purpose features, OpenCLIP (Ilharco et al., 2021), on most of the benchmarks at image and pixel levels.
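To make the distillation step concrete, the sketch below shows one generic way to distill a frozen large encoder into a smaller student using a cosine feature-matching loss in PyTorch. This is an illustration under assumptions, not the paper's actual procedure: the `TinyEncoder` class, the projection head, and the loss are placeholders standing in for the ViT backbones and the self-supervised objective used in the work.

```python
# Minimal sketch of distilling a frozen large teacher encoder into a smaller
# student. NOT the paper's exact objective; a simple cosine feature-matching
# loss on pooled features is used purely for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Placeholder encoder; a real setup would use ViT backbones."""
    def __init__(self, dim_out: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=4), nn.GELU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim_out),
        )
    def forward(self, x):
        return self.net(x)

teacher = TinyEncoder(dim_out=1024)   # stands in for the large (1B-parameter) ViT
student = TinyEncoder(dim_out=384)    # stands in for a smaller distilled variant
teacher.eval()
for p in teacher.parameters():        # the teacher stays frozen during distillation
    p.requires_grad_(False)

proj = nn.Linear(384, 1024)           # map student features to the teacher's width
opt = torch.optim.AdamW(
    list(student.parameters()) + list(proj.parameters()), lr=1e-4
)

images = torch.randn(8, 3, 224, 224)  # stand-in batch of images
with torch.no_grad():
    target = teacher(images)          # teacher features serve as the target
pred = proj(student(images))
# Cosine distillation loss: pull student features toward the teacher's.
loss = 1 - F.cosine_similarity(pred, target, dim=-1).mean()
loss.backward()
opt.step()
print(f"distillation loss: {loss.item():.4f}")
```

In practice the student would be trained over many batches of real images, and the loss would be whatever objective the pretraining pipeline already uses; the point of the sketch is only the teacher-frozen, student-trained structure of distillation.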