Modern frameworks for training large foundation models (LFMs) employ dataloaders in a data-parallel manner, with each loader processing a disjoint subset of the training data. When preparing data for LFM training that originates from multiple, distinct sources, two fundamental challenges arise. First, due to the quadratic computational complexity of the attention operator, the non-uniform sample distribution across data-parallel ranks leads to significant workload imbalance among dataloaders, degrading training efficiency. Second, supporting diverse data sources requires per-dataset file-access state that is redundantly replicated across parallel loaders, consuming excessive memory. This also hinders dynamic data mixing (e.g., curriculum learning) and causes redundant access and memory overhead under hybrid parallelism. We present MegaScale-Data, an industrial-grade distributed data-loading architecture for multisource LFM training, with three key innovations: (1) disaggregated data preprocessing via role-specific actors (Source Loaders/Data Constructors), which eliminates redundant data access across sources and parallel ranks and ensures multisource scalability; (2) a centralized, declarative data plane for load-time multisource orchestration, such as long-short context mixing, multimodality, and curriculum learning; (3) a multi-level auto-partitioning and scaling mechanism for Source Loaders under heterogeneous preprocessing costs. We also share our designs and operational experience in deployment and fault tolerance. MegaScale-Data achieves up to (1) a 4.5x improvement in end-to-end training throughput and (2) a 13.5x reduction in CPU memory usage.