In-Context Learning (ICL) allows Large Language Models (LLMs) to adapt to new tasks from just a few examples, but their predictions often suffer from systematic biases, leading to unstable classification performance. While calibration techniques have been proposed to mitigate these biases, we show that, in the logit space, many of these methods are equivalent to merely shifting the LLM's decision boundary, with no ability to alter its orientation. This proves inadequate when biases leave the LLM severely misdirected. To address these limitations and provide a unifying framework, we propose Supervised Calibration (SC), a loss-minimization framework that learns an optimal, per-class affine transformation of the LLM's predictive probabilities in the logit space, without requiring any data beyond the context. By using a more expressive functional class, SC not only subsumes many existing ICL calibration methods as special cases, but can also alter, and even completely reverse, the orientation of the LLM's decision boundary. Furthermore, SC's loss-based nature enables the seamless integration of two purpose-built regularization techniques: context-invariance and directional trust-region. The former tackles the instability issue in ICL, while the latter controls the degree of calibration. Finally, SC delivers state-of-the-art performance over calibration baselines in the 4-shot, 8-shot, and 16-shot settings across all nine datasets for Mistral-7B-Instruct-v0.3, LLaMA-2-7B-chat, and Qwen2-7B-Instruct.
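The abstract's core idea, learning a per-class affine map of the logits by loss minimization, can be illustrated with a toy sketch. Everything below is hypothetical: the function names, the plain cross-entropy objective, and the synthetic data are our illustration, and SC's actual objective and its two regularizers are not specified in the abstract.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fit_affine_calibration(logits, labels, lr=0.1, steps=500):
    """Learn a per-class scale w and shift b (calibrated logit = w*z + b)
    by gradient descent on cross-entropy over labeled examples.
    Hypothetical simplification: SC's regularization terms are omitted."""
    n, k = logits.shape
    w, b = np.ones(k), np.zeros(k)
    onehot = np.eye(k)[labels]
    for _ in range(steps):
        p = softmax(w * logits + b)          # calibrated probabilities
        g = (p - onehot) / n                 # gradient w.r.t. calibrated logits
        w -= lr * (g * logits).sum(axis=0)   # chain rule through w * z
        b -= lr * g.sum(axis=0)
    return w, b

# Toy setup: the "LLM" logit for class 0 is high exactly when the true label
# is 1, so the raw decision boundary points the wrong way. A shift-only
# calibrator (w fixed at 1) can at best predict a single class here; a
# learned negative w reverses the boundary's orientation.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=32)
logits = np.stack([labels + 0.1 * rng.standard_normal(32),
                   np.full(32, 0.2)], axis=1)
w, b = fit_affine_calibration(logits, labels)
preds = softmax(w * logits + b).argmax(axis=1)
```

Fixing `w = 1` and learning only `b` recovers the shift-only family the abstract argues many existing calibration methods reduce to; allowing `w` to vary (including negative values) is what lets the boundary's orientation change.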