The rapid proliferation of large language models (LLMs) has driven the need for efficient GPU training clusters. Operating such clusters is challenging, however, due to the frequent occurrence of training anomalies. Because existing diagnostic tools are narrowly tailored to specific issues, gaps remain in their ability to address anomalies that span the entire training stack. In response, we introduce Flare, a diagnostic framework designed for distributed LLM training at scale. Flare integrates a lightweight tracing daemon that provides full-stack, backend-extensible tracing, along with a diagnostic engine that automatically diagnoses anomalies, with a focus on performance regressions. Deployed across 6,000 GPUs and operating continuously for over eight months, Flare has demonstrated significant improvements in pinpointing deficiencies in real-world scenarios.