Large Language Models (LLMs) have demonstrated remarkable performance on various natural language processing tasks. However, training these models is computationally intensive and susceptible to faults, particularly in the attention mechanism, a critical component of transformer-based LLMs. In this paper, we investigate the impact of faults on LLM training through systematic fault injection experiments, focusing on INF, NaN, and near-INF values in computation results. We observe the propagation patterns of these errors, which can drive the model into non-trainable states and disrupt training, forcing the procedure to reload from checkpoints. To mitigate the impact of these faults, we propose ATTNChecker, the first Algorithm-Based Fault Tolerance (ABFT) technique tailored to the attention mechanism in LLMs. ATTNChecker is designed around the fault propagation patterns of LLMs and incorporates performance optimizations that adapt to both system reliability and model vulnerability, providing lightweight protection for fast LLM training. Evaluations on four LLMs show that ATTNChecker incurs on average a 7% training overhead while detecting and correcting all extreme errors. Compared with the state-of-the-art checkpoint/restore approach, ATTNChecker reduces recovery overhead by up to 49x.
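To illustrate the general idea behind ABFT-style protection of a matrix product (the core operation inside attention), here is a minimal sketch in the classic checksum-encoding style. This is an assumption-laden toy, not ATTNChecker's actual algorithm: the function names (`encode`, `detect_and_correct`), the tolerance, and the single-error model are all illustrative choices.

```python
import numpy as np

def encode(A, B):
    """Append a column-checksum row to A and a row-checksum column to B."""
    Ac = np.vstack([A, A.sum(axis=0, keepdims=True)])
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])
    return Ac, Br

def detect_and_correct(Cf, tol=1e-6):
    """Verify the checksum relations of the full product Cf = Ac @ Br.

    A single corrupted element C[i, j] (including NaN or INF) breaks exactly
    one row residue and one column residue, which locates the fault; the row
    checksum then reconstructs the correct value.
    """
    C = Cf[:-1, :-1]                          # data region of the product
    row_res = Cf[:-1, -1] - C.sum(axis=1)     # per-row checksum residues
    col_res = Cf[-1, :-1] - C.sum(axis=0)     # per-column checksum residues
    # ~(|res| <= tol) is True for large residues AND for NaN residues
    bad_r = np.flatnonzero(~(np.abs(row_res) <= tol))
    bad_c = np.flatnonzero(~(np.abs(col_res) <= tol))
    if bad_r.size == 1 and bad_c.size == 1:
        i, j = bad_r[0], bad_c[0]
        # Recover C[i, j] from the row checksum and the other row elements;
        # this also works when the corrupted value is NaN or INF.
        C[i, j] = Cf[i, -1] - np.delete(C[i, :], j).sum()
    return C
```

In this scheme the checksums are computed before the (fault-prone) multiplication, so an extreme value appearing in one output element can be detected and repaired locally instead of rolling the whole training run back to a checkpoint, which is the cost asymmetry the abstract's 49x recovery-overhead reduction points at.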