The mathematical problem-solving capabilities of large language models have become a focal point of research, with growing interest in leveraging self-generated reasoning paths as a promising way to refine and enhance these models. These paths capture step-by-step logical processes while requiring only the correct answer for supervision. Self-training has been shown to be effective on reasoning tasks while eliminating the need for external models and manual annotations. However, how best to use self-generated data for model training remains an open challenge. In this work, we propose Entropy-Based Adaptive Weighting for Self-Training (EAST), an adaptive weighting strategy designed to prioritize uncertain data during self-training. Specifically, EAST employs a mapping function with a tunable parameter that controls the sharpness of the weighting, assigning higher weights to data on which the model exhibits greater uncertainty. This approach guides the model to focus on more informative and challenging examples, thereby enhancing its reasoning ability. We evaluate our approach on the GSM8K and MATH benchmarks. Empirical results show that, while the vanilla method yields virtually no improvement (0%) on MATH, EAST achieves around a 1% gain over the backbone model. On GSM8K, EAST attains a further 1-2% performance boost compared to the vanilla method.
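The weighting mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes uncertainty is measured as the Shannon entropy of the final-answer distribution over several sampled reasoning paths, and assumes a simple power-law mapping whose exponent plays the role of the tunable sharpness parameter; the function names (`answer_entropy`, `east_weight`) and the exact mapping form are hypothetical.

```python
import math
from collections import Counter

def answer_entropy(sampled_answers):
    """Shannon entropy of the empirical distribution over final answers
    obtained by sampling multiple reasoning paths for one question.
    Zero entropy means the model is fully certain of its answer."""
    counts = Counter(sampled_answers)
    n = len(sampled_answers)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def east_weight(entropy, max_entropy, sharpness=2.0):
    """Hypothetical mapping from entropy to a training weight in [0, 1].

    Entropy is normalized by its maximum (log of the number of samples),
    then raised to 1/sharpness. Larger `sharpness` flattens the curve near
    the top, so moderately uncertain examples already receive high weight;
    this stands in for the tunable parameter mentioned in the abstract.
    """
    if max_entropy == 0:
        return 0.0
    return (entropy / max_entropy) ** (1.0 / sharpness)

# Example: 4 sampled answers per question, so max entropy is log(4).
h_max = math.log(4)
certain = answer_entropy(["7", "7", "7", "7"])    # model always agrees
uncertain = answer_entropy(["7", "3", "5", "9"])  # model fully disagrees

# Per-example loss weights: uncertain data is up-weighted.
w_certain = east_weight(certain, h_max)
w_uncertain = east_weight(uncertain, h_max)
```

In a training loop, such weights would multiply each example's per-sample loss (e.g. token-level cross-entropy with `reduction='none'`) before averaging, steering gradient updates toward the questions the model is least sure about.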