In reasoning tasks, even a minor error can cascade into inaccurate results, leading to suboptimal performance of large language models in such domains. Earlier fine-tuning approaches sought to mitigate this by leveraging more precise supervisory signals from human labeling, larger models, or self-sampling, albeit at a high cost. In contrast, we develop a method that avoids external resources and instead introduces perturbations to the input. Our training approach randomly masks certain tokens within the chain of thought, a technique we found to be particularly effective for reasoning tasks. When applied to fine-tuning Llama-2-7B on GSM8K, this method, requiring only a few lines of code to be changed, achieved a 5\% improvement in GSM8K accuracy and a 10\% improvement in GSM-IC accuracy over standard supervised fine-tuning. Furthermore, it is complementary to existing methods: when combined with explicit data augmentation, it yields improvements across five datasets under various augmentation methods and with two different base models. We further investigate the mechanism behind this improvement through case studies and quantitative analysis, which suggest that our approach may better support the model in capturing long-distance dependencies, especially those involving the question, thereby deepening its understanding of the premises in the question and of prior steps. Our code is available on GitHub.
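The core perturbation can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the `mask_prob` default, and the use of index ranges to locate the chain-of-thought span are all assumptions for the sake of the example.

```python
import random

def mask_cot_tokens(token_ids, cot_start, cot_end, mask_id,
                    mask_prob=0.15, seed=None):
    """Randomly replace tokens inside the chain-of-thought span with a mask token.

    token_ids: full tokenized sequence (question + chain of thought + answer).
    cot_start, cot_end: half-open index range of the chain-of-thought tokens
        (hypothetical way of locating the span; a real tokenizer pipeline
        would track these offsets itself).
    mask_id: id of a placeholder token assumed to exist in the vocabulary.
    mask_prob: fraction of chain-of-thought tokens to perturb (illustrative
        default, not a value from the paper).
    """
    rng = random.Random(seed)
    masked = list(token_ids)
    for i in range(cot_start, cot_end):
        # Only tokens inside the chain of thought are candidates for masking;
        # the question and final answer are left untouched.
        if rng.random() < mask_prob:
            masked[i] = mask_id
    return masked
```

During supervised fine-tuning, such a function would be applied to each training example's input ids before the forward pass, leaving the loss computation unchanged, which is consistent with the claim that only a few lines of code need modification.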