Large language models (LLMs) have shown remarkable capability across numerous tasks and applications. However, fine-tuning LLMs on high-quality datasets under external supervision remains prohibitively expensive. In response, LLM self-improvement approaches have developed rapidly in recent years. The typical LLM self-improvement paradigm trains the LLM on self-generated data, part of which may be detrimental and should be filtered out due to unstable data quality. While current works primarily employ filtering strategies based on answer correctness, in this paper we demonstrate that also filtering out samples that are correct but exhibit a high distribution shift extent (DSE) can benefit self-improvement. Since the actual sample distribution is usually inaccessible, we propose a new metric, DS weight, to approximate DSE, inspired by Importance Weighting methods. We then integrate DS weight with self-consistency to comprehensively filter self-generated samples and fine-tune the language model. Experiments show that with only a tiny validation set (at most 5\% of the training set size) to compute DS weight, our approach notably improves the reasoning ability of current LLM self-improvement methods, achieving performance on par with methods that rely on external supervision from pre-trained reward models.
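The filtering pipeline sketched in the abstract can be illustrated as follows. This is a minimal toy sketch, not the paper's exact formulation: the per-sample `score` feature, the Gaussian density-ratio estimator standing in for DS weight, and the threshold `tau` are all illustrative assumptions.

```python
from collections import Counter
import math

def self_consistency(answers):
    # Self-consistency: majority vote over multiple sampled answers.
    return Counter(answers).most_common(1)[0][0]

def _gaussian_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def ds_weight(x, valid_scores, gen_scores):
    # Importance-weighting-style density ratio p_valid(x) / p_gen(x),
    # here crudely modeled with 1-D Gaussians fit to a tiny validation
    # set and to the self-generated set (an illustrative stand-in for
    # the paper's DS weight). A low ratio means high distribution shift.
    def fit(xs):
        mu = sum(xs) / len(xs)
        var = sum((v - mu) ** 2 for v in xs) / len(xs)
        return mu, max(var, 1e-8) ** 0.5
    mu_v, sd_v = fit(valid_scores)
    mu_g, sd_g = fit(gen_scores)
    return _gaussian_pdf(x, mu_v, sd_v) / _gaussian_pdf(x, mu_g, sd_g)

def filter_samples(samples, valid_scores, tau=0.5):
    # Keep a self-generated sample only if (1) its answer matches the
    # self-consistency vote and (2) its DS weight exceeds tau, i.e. its
    # estimated distribution shift is low. `samples` is a list of dicts
    # with keys "answer", "sampled_answers", and "score" (assumed format).
    gen_scores = [s["score"] for s in samples]
    kept = []
    for s in samples:
        if s["answer"] != self_consistency(s["sampled_answers"]):
            continue  # filtered by answer correctness proxy
        if ds_weight(s["score"], valid_scores, gen_scores) < tau:
            continue  # correct, but high distribution shift: filtered
        kept.append(s)
    return kept
```

In this sketch the two filters compose exactly as the abstract describes: self-consistency handles correctness, while the density ratio filters correct-but-shifted samples before fine-tuning.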