Large language models (LLMs) demonstrate remarkable performance on many NLP tasks, yet often exhibit order dependence: simply reordering semantically identical tokens (e.g., answer choices in multiple-choice questions) can lead to inconsistent predictions. Recent work proposes Set-Based Prompting (SBP) as a way to remove order information from designated token subsets, thereby mitigating positional biases. However, applying SBP to base models induces an out-of-distribution input format, which can degrade in-distribution performance. We introduce a fine-tuning strategy that integrates SBP into the training process, "pulling" these set-formatted prompts closer to the model's training manifold and showing that SBP can be incorporated directly into a model via fine-tuning. Our experiments on in-distribution (MMLU) and out-of-distribution (CSQA, ARC Challenge) multiple-choice tasks show that SBP fine-tuning significantly improves both accuracy and robustness to answer-order permutations while preserving broader language modeling capabilities. We discuss the broader implications of order-invariant modeling and outline future directions for building fairer, more consistent LLMs.
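To make the SBP mechanism concrete, below is a minimal sketch, under our own assumptions rather than the authors' reference implementation, of how a set-formatted prompt can be encoded: each parallel sub-sequence (e.g., an answer option) attends to the shared prefix and causally to itself but never to sibling sub-sequences, and all sub-sequences reuse the same positional indices, so permuting them cannot change the model's computation. All function and variable names are illustrative.

```python
import torch

def set_based_mask_and_positions(prefix_len, option_lens, suffix_len):
    """Attention mask and position ids for one set-formatted prompt.

    Each option sub-sequence attends to the shared prefix and causally
    to itself, but never to sibling options, and every option reuses
    the same position range, so reordering the options leaves the
    computation unchanged. Illustrative sketch only.
    """
    total = prefix_len + sum(option_lens) + suffix_len
    mask = torch.zeros(total, total, dtype=torch.bool)  # True = may attend
    pos = torch.zeros(total, dtype=torch.long)

    # Shared prefix: ordinary causal attention, ordinary positions.
    for i in range(prefix_len):
        mask[i, : i + 1] = True
        pos[i] = i

    # Options: causal within each option, visibility into the prefix
    # only, and positions restarting at prefix_len for every option.
    start = prefix_len
    for length in option_lens:
        for j in range(length):
            i = start + j
            mask[i, :prefix_len] = True        # see the shared prefix
            mask[i, start : i + 1] = True      # causal within this option
            pos[i] = prefix_len + j            # shared position range
        start += length

    # Suffix (e.g., the final question/answer cue): attends to everything
    # before it; positions resume after the longest option.
    max_opt = max(option_lens, default=0)
    for j in range(suffix_len):
        i = start + j
        mask[i, : i + 1] = True
        pos[i] = prefix_len + max_opt + j
    return mask, pos
```

For instance, `set_based_mask_and_positions(5, [3, 3], 2)` produces identical mask and position structures regardless of which 3-token option is listed first; the assumed integration point is supplying this 2D mask and these position ids to the model in place of the default causal mask and sequential positions.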