Power-law scaling indicates that large-scale training with uniform sampling is prohibitively slow. Active learning methods aim to increase data efficiency by prioritizing learning on the most relevant examples. Despite their appeal, these methods have yet to be widely adopted, since no single algorithm has been shown to a) generalize across models and tasks, b) scale to large datasets, and c) yield overall FLOP savings once the overhead of data selection is accounted for. In this work we propose a method that satisfies all three properties, leveraging small, cheap proxy models to estimate "learnability" scores for datapoints, which are then used to prioritize data for training much larger models. As a result, our models require 46% and 51% fewer training updates, and up to 25% less total computation, to reach the same performance as uniformly trained visual classifiers on JFT and multimodal models on ALIGN, respectively. Finally, we find our data-prioritization scheme to be complementary to recent data-curation methods and learning objectives, yielding new state-of-the-art results on several multimodal transfer tasks.
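To make the prioritization step concrete, the sketch below illustrates one common way a "learnability" score can be computed and used for data selection. This is a hypothetical, minimal illustration, not the paper's exact algorithm: it assumes the score is the difference between the per-example loss of a small online proxy and that of a frozen, pretrained reference proxy, and the helper names (`learnability_scores`, `select_top_fraction`) are illustrative.

```python
# Hypothetical sketch of learnability-based data prioritization.
# Assumption: two small proxy models supply per-example losses; points the
# online proxy has not yet learnt (high loss) but a trained reference proxy
# finds easy (low loss) are deemed most learnable and kept for the big model.
import numpy as np

def learnability_scores(online_proxy_losses, reference_proxy_losses):
    """Score = online-proxy loss minus reference-proxy loss: high for points
    that are not yet learnt but are learnable."""
    return np.asarray(online_proxy_losses) - np.asarray(reference_proxy_losses)

def select_top_fraction(batch_indices, scores, keep_fraction=0.25):
    """Keep the highest-scoring fraction of a candidate super-batch for the
    large model's gradient update; the rest is discarded."""
    k = max(1, int(len(batch_indices) * keep_fraction))
    top = np.argsort(scores)[-k:]
    return [batch_indices[i] for i in top]

# Toy usage: a super-batch of 8 candidate examples.
idx = list(range(8))
online_losses = np.array([2.1, 0.3, 1.8, 0.9, 2.5, 0.4, 1.2, 3.0])
reference_losses = np.array([0.5, 0.2, 1.7, 0.3, 0.6, 0.4, 1.1, 2.9])
scores = learnability_scores(online_losses, reference_losses)
print(select_top_fraction(idx, scores, keep_fraction=0.25))  # -> [0, 4]
```

Because the proxies are much smaller than the learner, scoring the candidate pool adds only a modest overhead relative to the FLOPs saved by skipping low-value examples.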