Despite the widespread adoption of LLMs owing to their superior performance across tasks, their high computational costs often lead potential users to opt for the pretraining-finetuning pipeline instead. However, biases prevalent in manually constructed datasets can introduce spurious correlations between tokens and labels, creating so-called shortcuts that hinder the generalizability of fine-tuned models. Existing debiasing methods often rely on prior knowledge of specific dataset biases, which is difficult to acquire a priori. We propose RAZOR (Rewriting And Zero-bias Optimization Refinement), a novel, unsupervised, data-focused debiasing approach based on text rewriting for shortcut mitigation. RAZOR leverages LLMs to iteratively rewrite potentially biased text segments, replacing them with heuristically selected alternatives drawn from a shortcut space defined by token statistics and positional information. This process aims to align surface-level text features more closely with diverse label distributions, thereby promoting the learning of genuine linguistic patterns. Compared with unsupervised SoTA models, RAZOR improves F1 by 3.5% on FEVER and by 6.5% on the MNLI and SNLI datasets. RAZOR also effectively mitigates specific known biases, halving the prevalence of bias-related terms without requiring prior bias information, a result on par with SoTA models that leverage such information. Our work prioritizes data manipulation over architectural modifications, emphasizing the pivotal role of data quality in enhancing model performance and fairness. This research also contributes to more robust evaluation benchmarks for debiasing methods by incorporating metrics for both bias reduction and overall model efficacy.
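To make the notion of token-label shortcuts concrete, the following is a minimal sketch (not the paper's actual scoring function) of how one might quantify spurious token-label correlations from dataset statistics: a token whose conditional label distribution deviates sharply from the overall label prior is a shortcut candidate. The function name, the skew measure, and the toy examples are all illustrative assumptions.

```python
from collections import Counter, defaultdict

def shortcut_scores(examples):
    """Score each token by how skewed its conditional label distribution
    is relative to the overall label prior (a rough proxy for
    token-label spurious correlation; illustrative, not RAZOR's metric)."""
    label_counts = Counter(label for _, label in examples)
    total = sum(label_counts.values())
    prior = {l: c / total for l, c in label_counts.items()}

    # count, for each token, how many examples of each label contain it
    token_label = defaultdict(Counter)
    for text, label in examples:
        for tok in set(text.lower().split()):
            token_label[tok][label] += 1

    scores = {}
    for tok, counts in token_label.items():
        n = sum(counts.values())
        # skew: max deviation of the conditional label prob from the prior
        scores[tok] = max(abs(counts[l] / n - prior[l]) for l in prior)
    return scores

# toy fact-verification-style dataset (hypothetical)
examples = [
    ("no one attended the event", "refutes"),
    ("no record of the claim exists", "refutes"),
    ("the event happened in 2010", "supports"),
    ("the records confirm it", "supports"),
]
scores = shortcut_scores(examples)
# "no" co-occurs only with "refutes", so it scores higher than "the",
# which appears equally often under both labels
```

Tokens flagged this way would then be candidates for LLM-based rewriting, so that their surface form no longer predicts the label.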