Power-law scaling indicates that large-scale training with uniform sampling is prohibitively slow. Active learning methods aim to increase data efficiency by prioritizing learning on the most relevant examples. Despite their appeal, these methods have yet to be widely adopted, since no single algorithm has been shown to (a) generalize across models and tasks, (b) scale to large datasets, and (c) yield overall FLOP savings once the overhead of data selection is accounted for. In this work we propose a method that satisfies all three properties: small, inexpensive proxy models estimate "learnability" scores for datapoints, and these scores are used to prioritize data for training much larger models. As a result, our models require 46% and 51% fewer training updates, and up to 25% less total computation, to reach the same performance as uniformly trained visual classifiers on JFT and multimodal models on ALIGN, respectively. Finally, we find our data-prioritization scheme to be complementary to recent data-curation and learning objectives, yielding a new state of the art on several multimodal transfer tasks.
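To make the prioritization idea concrete, the following is a minimal sketch, not the paper's implementation. It assumes that "learnability" is scored as the proxy learner's loss minus a pretrained reference model's loss (examples the learner still gets wrong but a capable reference finds easy score highest), and that the large model then trains only on the top-scoring fraction of each candidate batch; all function names and the keep fraction are illustrative.

```python
# Hedged sketch of learnability-based data prioritization with a small proxy model.
# Assumption: score = proxy loss - reference loss; keep the top fraction per super-batch.
import numpy as np

def learnability_scores(proxy_losses: np.ndarray, reference_losses: np.ndarray) -> np.ndarray:
    """Score each example: high when the proxy still finds it hard,
    but the reference model finds it easy (i.e., it is learnable)."""
    return proxy_losses - reference_losses

def select_top_fraction(scores: np.ndarray, keep_fraction: float) -> np.ndarray:
    """Return indices of the highest-scoring examples to pass to the large model."""
    k = max(1, int(len(scores) * keep_fraction))
    return np.argsort(scores)[-k:]

# Toy usage: a super-batch of 8 examples, keeping the top 50% for the large model.
proxy_losses = np.array([2.1, 0.3, 1.8, 0.9, 2.5, 0.2, 1.1, 3.0])
reference_losses = np.array([0.4, 0.2, 1.7, 0.3, 0.5, 0.1, 1.0, 2.9])
scores = learnability_scores(proxy_losses, reference_losses)
selected = select_top_fraction(scores, keep_fraction=0.5)
print("train large model on examples:", sorted(selected.tolist()))
```

Because the proxy and reference models are far smaller than the model being trained, the scoring pass adds only a modest overhead, which is how the method can save FLOPs overall rather than merely reducing update counts.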