Supervised fine-tuning (SFT) plays a crucial role in adapting large language models (LLMs) to specific domains or tasks. However, as empirical experiments demonstrate, data collected in practical applications inevitably contains noise, which poses significant challenges to model performance on downstream tasks. There is therefore an urgent need for a noise-robust SFT framework that enhances model capabilities on downstream tasks. To address this challenge, we introduce RobustFT, a robust SFT framework that performs noise detection and relabeling on downstream task data. For noise identification, our approach employs a multi-expert collaborative system with inference-enhanced models to achieve superior noise detection. In the denoising phase, we utilize a context-enhanced strategy that incorporates the most relevant and confident knowledge, followed by careful assessment, to generate reliable annotations. Additionally, we introduce an effective data selection mechanism based on response entropy, ensuring that only high-quality samples are retained for fine-tuning. Extensive experiments conducted on multiple LLMs across five datasets demonstrate RobustFT's exceptional performance in noisy scenarios.
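As a rough illustration of the response-entropy-based data selection step mentioned above, the sketch below computes an average per-token entropy for each relabeled sample and keeps only low-entropy (high-confidence) ones. This is a minimal sketch under assumptions, not the paper's released implementation; the function names, the `entropy_threshold` value, and the assumption that per-token probability distributions are available are all illustrative.

```python
"""Minimal sketch of entropy-based sample selection (illustrative, not the
authors' code). Assumes each sample carries per-token probability
distributions from the relabeling model."""

import math
from typing import Dict, List


def response_entropy(token_probs: List[List[float]]) -> float:
    """Average per-token entropy of the model's predictive distributions.

    `token_probs` holds, for each generated token position, a (normalized)
    probability distribution over candidate tokens.
    """
    entropies = []
    for dist in token_probs:
        h = -sum(p * math.log(p) for p in dist if p > 0.0)
        entropies.append(h)
    return sum(entropies) / max(len(entropies), 1)


def select_low_entropy_samples(samples: List[Dict],
                               entropy_threshold: float = 1.0) -> List[Dict]:
    """Keep only samples whose response entropy falls below a threshold,
    so that only high-confidence annotations are retained for fine-tuning."""
    selected = []
    for sample in samples:
        h = response_entropy(sample["token_probs"])
        if h < entropy_threshold:
            selected.append({**sample, "entropy": h})
    return selected


if __name__ == "__main__":
    # Toy example: two relabeled samples with per-token distributions.
    samples = [
        {"text": "confident answer",
         "token_probs": [[0.9, 0.05, 0.05], [0.95, 0.05]]},
        {"text": "uncertain answer",
         "token_probs": [[0.4, 0.3, 0.3], [0.5, 0.25, 0.25]]},
    ]
    kept = select_low_entropy_samples(samples, entropy_threshold=0.5)
    print([s["text"] for s in kept])  # only the low-entropy sample is retained
```

The threshold here is a free parameter; in practice it would be tuned (or replaced by a top-k / quantile rule) so that the retained subset balances annotation reliability against fine-tuning data volume.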