When selecting data for training large-scale models, standard practice is to filter for examples that match human notions of data quality. Such filtering yields qualitatively clean datapoints that intuitively should improve model behavior. However, in practice, the opposite can often happen: we find that selecting according to similarity with "high quality" data sources may not increase (and can even hurt) performance compared to randomly selecting data. To develop better methods for selecting data, we start by framing dataset selection as an optimization problem that we can directly solve for: given target tasks, a learning algorithm, and candidate data, select the subset that maximizes model performance. This framework thus avoids handpicked notions of data quality and instead explicitly models how the learning process uses training datapoints to predict on the target tasks. Our resulting method greatly improves language model (LM) performance on both pre-specified tasks and previously unseen tasks. Specifically, choosing target tasks representative of standard LM problems and evaluating on diverse held-out benchmarks, our selected datasets provide a 2x compute multiplier over baseline methods.
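For concreteness, the optimization framing described above can be written as a minimal sketch in the following form; the notation (D, S, k, A, L_task) is ours for illustration, not taken from the paper:

```latex
% Minimal formalization of the selection objective (notation assumed, not from the source):
%   D      = candidate data pool
%   k      = selection budget (number of datapoints to keep)
%   A(S)   = learning algorithm mapping a training subset S to a trained model
%   L_task = loss of the trained model on the target tasks
% The expectation accounts for training randomness (e.g., initialization, data ordering).
\[
S^{*} \;=\; \operatorname*{arg\,min}_{\,S \subseteq D,\; |S| \le k}\;
\mathbb{E}\!\left[\, \mathcal{L}_{\mathrm{task}}\bigl( \mathcal{A}(S) \bigr) \right]
\]
```

Solving this exactly would require retraining on every candidate subset, which is intractable; the point of the framing is that it makes the role of the learning algorithm and the target tasks explicit, rather than relying on a handpicked quality proxy.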