Training transformer models requires substantial GPU compute and memory resources. In homogeneous clusters, distributed strategies allocate resources evenly, but this approach is inefficient for heterogeneous clusters, where GPUs differ in compute power and memory capacity. Because high-end GPUs are costly and limited in availability, heterogeneous clusters with diverse GPU types are becoming more common. Existing methods attempt to balance compute across GPUs according to their capacity, but they often underutilize compute due to memory constraints. We present Cephalo, a system that optimizes both compute and memory usage by decoupling compute distribution from training state assignment. Cephalo outperforms state-of-the-art methods, achieving significantly higher training throughput while supporting larger models and batch sizes.