Large language models (LLMs) typically require extensive labeled datasets and substantial training compute to achieve strong performance on downstream tasks. This paper explores a self-training paradigm in which the LLM autonomously curates its own labels and selectively trains on unknown data samples identified through a reference-free consistency method. Empirical evaluations demonstrate significant reductions in hallucination during generation across multiple subjects. Furthermore, the selective training framework mitigates catastrophic forgetting on out-of-distribution benchmarks, addressing a critical limitation in training LLMs. Our findings suggest that such an approach can substantially reduce the dependence on large labeled datasets, paving the way for more scalable and cost-effective language model training.
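As a rough illustration of how a reference-free consistency check might flag "unknown" samples, the sketch below samples several generations per prompt and treats low agreement among them as a signal of uncertainty. This is only one plausible reading of the abstract, not the paper's exact algorithm; the names `sample_fn`, `k`, and `threshold` are hypothetical placeholders.

```python
from collections import Counter
from typing import Callable, List, Tuple


def consistency_score(answers: List[str]) -> Tuple[str, float]:
    """Return the majority answer and the fraction of samples agreeing with it."""
    counts = Counter(a.strip().lower() for a in answers)
    top_answer, top_count = counts.most_common(1)[0]
    return top_answer, top_count / len(answers)


def select_unknown_samples(
    prompts: List[str],
    sample_fn: Callable[[str, int], List[str]],  # hypothetical: returns k sampled generations for a prompt
    k: int = 8,
    threshold: float = 0.6,
) -> List[Tuple[str, str, float]]:
    """Flag prompts whose sampled answers disagree (low consistency) as 'unknown'.

    High-consistency prompts could instead contribute their majority answer as a
    self-curated pseudo-label for selective training.
    """
    unknown = []
    for prompt in prompts:
        answers = sample_fn(prompt, k)
        majority, score = consistency_score(answers)
        if score < threshold:
            unknown.append((prompt, majority, score))
    return unknown
```

Under this assumed setup, only the flagged low-consistency prompts would enter the self-training set, which is one way the selective training described above could limit overwriting of already-known knowledge.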