Scaling laws have recently been employed to derive the compute-optimal model size (number of parameters) for a given compute duration. We advance and refine such methods to infer compute-optimal model shapes, such as width and depth, and successfully implement this in vision transformers. Our shape-optimized vision transformer, SoViT, achieves results competitive with models that exceed twice its size, despite being pre-trained with an equivalent amount of compute. For example, SoViT-400m/14 achieves 90.3% fine-tuning accuracy on ILSVRC-2012, surpassing the much larger ViT-g/14 and approaching ViT-G/14 under identical settings, while requiring less than half the inference cost. We conduct a thorough evaluation across multiple tasks, such as image classification, captioning, VQA, and zero-shot transfer, demonstrating the effectiveness of our model across a broad range of domains and identifying its limitations. Overall, our findings challenge the prevailing approach of blindly scaling up vision models and pave a path toward more informed scaling.
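To make the core idea concrete, the following is a minimal toy sketch, not the paper's actual procedure: it assumes validation loss follows a saturating power law in compute, loss(C) ≈ a·C^(−b) + c, fits that form separately for each candidate width from hypothetical sweep data, and then picks the width whose fitted curve predicts the lowest loss at a target compute budget. All numbers, the candidate widths, and the power-law form itself are illustrative assumptions.

```python
# Toy illustration (hypothetical data; not the paper's exact method):
# fit loss(C) ~ a * C**(-b) + c per candidate width, then select the
# width with the lowest predicted loss at a target compute budget.
import numpy as np
from scipy.optimize import curve_fit

def power_law(c, a, b, irreducible):
    """Saturating power law: loss decays with compute toward a floor."""
    return a * c ** (-b) + irreducible

# Hypothetical sweep: validation loss at several compute budgets (GFLOPs)
# for three candidate widths of an otherwise fixed transformer.
compute = np.array([1e3, 3e3, 1e4, 3e4, 1e5])
losses_by_width = {
    768:  np.array([2.90, 2.61, 2.38, 2.22, 2.11]),
    1024: np.array([2.95, 2.60, 2.33, 2.15, 2.03]),
    1536: np.array([3.10, 2.68, 2.36, 2.14, 1.99]),
}

target_compute = 1e6  # budget at which we want the compute-optimal width

best_width, best_pred = None, float("inf")
for width, losses in losses_by_width.items():
    # Fit the three power-law parameters to this width's sweep.
    params, _ = curve_fit(power_law, compute, losses,
                          p0=(10.0, 0.3, 1.5), maxfev=10_000)
    pred = power_law(target_compute, *params)
    print(f"width={width}: predicted loss at C={target_compute:.0e} is {pred:.3f}")
    if pred < best_pred:
        best_width, best_pred = width, pred

print(f"compute-optimal width under this toy model: {best_width}")
```

In this toy setup the larger width wins only once the budget is big enough for its faster-decaying term to pay off, which mirrors the abstract's point: the compute-optimal shape depends on the compute budget rather than being "bigger is always better".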