State-of-the-art large language and vision models are trained over trillions of tokens that are aggregated from a large variety of sources. As training data collections grow, manually managing the samples becomes time-consuming, tedious, and prone to errors. Yet recent research shows that the data mixture and the order in which samples are visited during training can significantly influence model accuracy. We build and present Mixtera, a data plane for foundation model training that enables users to declaratively express which data samples should be used in which proportion and in which order during training. Mixtera is a centralized, read-only layer that is deployed on top of existing training data collections and can be declaratively queried. It operates independently of the filesystem structure and supports mixtures across arbitrary properties (e.g., language, source dataset) as well as dynamic adjustment of the mixture based on model feedback. We experimentally evaluate Mixtera and show that our implementation does not bottleneck training and scales to 256 GH200 superchips. We demonstrate how Mixtera supports recent advancements in mixing strategies by implementing the proposed Adaptive Data Optimization (ADO) algorithm in the system and evaluating its performance impact. We also explore the role of mixtures for vision-language models.
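To make the declarative interface described above concrete, the following is a minimal, hypothetical sketch of what a property-based mixture specification could look like; the class and function names are illustrative assumptions and do not reflect Mixtera's actual API. It only shows the core idea: the user declares per-property proportions, and a data plane resolves them into a sampling schedule for the training loop.

```python
# Hypothetical sketch (NOT Mixtera's published API): a declarative mixture over
# sample properties, resolved into weighted sampling decisions for training.
from dataclasses import dataclass
import random


@dataclass
class MixtureComponent:
    # Property filter, e.g. {"language": "en", "source": "web"}.
    properties: dict
    # Desired fraction of training samples drawn from this component.
    weight: float


def normalize(components: list[MixtureComponent]) -> list[MixtureComponent]:
    """Rescale weights so they sum to 1.0."""
    total = sum(c.weight for c in components)
    return [MixtureComponent(c.properties, c.weight / total) for c in components]


def sample_component(components: list[MixtureComponent],
                     rng: random.Random) -> MixtureComponent:
    """Pick the component that the next training sample should come from."""
    return rng.choices(components, weights=[c.weight for c in components], k=1)[0]


if __name__ == "__main__":
    # Declare a mixture across arbitrary properties (language, source dataset).
    mixture = normalize([
        MixtureComponent({"language": "en", "source": "web"}, weight=0.6),
        MixtureComponent({"language": "de", "source": "web"}, weight=0.2),
        MixtureComponent({"language": "en", "source": "code"}, weight=0.2),
    ])
    rng = random.Random(0)
    # In a real system, a centralized server would locate and stream matching
    # samples; here we only show which component each next sample targets.
    for _ in range(5):
        print(sample_component(mixture, rng).properties)
```

A dynamic mixing strategy such as ADO would, under this sketch, simply update the component weights from model feedback between training steps and re-normalize before the next sampling decision.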