When performing classification tasks with language models, would you rather have a single highly accurate class or have every class deliver reliable performance? Clearly, more balanced accuracy across classes better reflects what most users expect. For large language models (LLMs) in particular, the decent overall accuracy they achieve through in-context learning (ICL) obscures large differences in individual class accuracies. In this work, we uncover and tackle language models' imbalance in per-class prediction accuracy by reconceptualizing it as the Contextual Oddity Bias (COBias), and we are the first to employ nonlinear integer programming (NIP) to debias it. In brief, the proposed COBias metric measures the accuracy differences between class pairs, with which we reveal the large per-class accuracy gaps exhibited by LLMs of varied scales and families. We then propose Debiasing as Nonlinear Integer Programming (DNIP) to correct ICL per-class probabilities toward lower COBias and higher overall accuracy. The optimization objective is built directly from the COBias and accuracy evaluation scores; it is non-differentiable, and we solve it with the simulated annealing metaheuristic. Evaluations on three LLMs across seven NLP classification tasks show that DNIP simultaneously achieves a significant COBias reduction (-27%) and an accuracy improvement (+12%) over conventional ICL, suggesting that modeling pairwise class accuracy differences is a promising direction for more accurate and reliable LLM predictions.
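To make the metric concrete, below is a minimal sketch of one plausible instantiation of COBias as the mean absolute per-class accuracy gap averaged over all class pairs. The exact formula, the function names, and the NumPy-based setup are illustrative assumptions, not the paper's definitive implementation.

```python
import numpy as np

def per_class_accuracy(y_true, y_pred, num_classes):
    """Accuracy computed separately for each gold class."""
    accs = np.zeros(num_classes)
    for c in range(num_classes):
        mask = y_true == c
        accs[c] = (y_pred[mask] == c).mean() if mask.any() else 0.0
    return accs

def cobias(y_true, y_pred, num_classes):
    """Mean absolute per-class accuracy difference over all class pairs
    (one plausible reading of the pairwise COBias metric; an assumption)."""
    accs = per_class_accuracy(y_true, y_pred, num_classes)
    gaps = [abs(accs[i] - accs[j])
            for i in range(num_classes)
            for j in range(i + 1, num_classes)]
    return float(np.mean(gaps))
```

Under this reading, a model that is 95% accurate on one class but 40% on another scores a large COBias even when its overall accuracy looks acceptable, which is exactly the failure mode the abstract describes.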
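Likewise, here is a hedged sketch of the DNIP idea: searching discrete per-class correction weights with simulated annealing to minimize a combined objective of negative accuracy plus a COBias penalty, since that objective is non-differentiable. The weight range, the objective weighting `lam`, the division-based correction, and the cooling schedule are all assumptions chosen for illustration; `cobias` is reused from the sketch above.

```python
import numpy as np

def dnip_anneal(probs, y_true, num_classes, lam=1.0, steps=5000,
                t0=1.0, cooling=0.999, max_weight=20, seed=0):
    """Simulated-annealing search over integer per-class correction weights.

    probs: (n, num_classes) array of ICL class probabilities.
    Corrected prediction: argmax_j probs[:, j] / w[j], w[j] in {1..max_weight}.
    Objective: -accuracy + lam * COBias (assumed form, not the paper's exact one).
    """
    rng = np.random.default_rng(seed)
    w = np.ones(num_classes, dtype=int)

    def score(weights):
        pred = np.argmax(probs / weights, axis=1)
        acc = (pred == y_true).mean()
        return -acc + lam * cobias(y_true, pred, num_classes)

    cur_s = score(w)
    best_w, best_s = w.copy(), cur_s
    t = t0
    for _ in range(steps):
        cand = w.copy()
        j = rng.integers(num_classes)
        cand[j] = rng.integers(1, max_weight + 1)  # perturb one class weight
        s = score(cand)
        # Accept improvements always; accept worse moves with temperature-scaled odds.
        if s < cur_s or rng.random() < np.exp((cur_s - s) / t):
            w, cur_s = cand, s
            if cur_s < best_s:
                best_w, best_s = w.copy(), cur_s
        t *= cooling  # geometric cooling schedule
    return best_w
```

At inference time, the returned weights would rescale the ICL probabilities before the argmax, trading a small amount of raw likelihood mass for a flatter per-class accuracy profile.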