This paper studies in-context learning (ICL) by decomposing the output of large language models into the individual contributions of attention heads and MLPs (components). We observe curious components: good-performing ones that individually perform well on a classification task even when the full model performs poorly; bad-performing ones that do much worse than chance; and label-biased components that always predict the same label. We find that component accuracies are well-correlated across different demonstration sets and perturbations of prompt templates, even when full-model accuracy varies greatly. Based on these findings, we propose component reweighting, which learns to linearly re-scale component activations from a few labeled examples. Given 24 labeled examples, our method improves accuracy by an average of 6.0 points over 24-shot ICL across 8 tasks on Llama-2-7B. Overall, this paper both enriches our understanding of ICL and provides a practical method for improving it by examining model internals.
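The component-reweighting idea can be sketched in a few lines. This is a minimal illustration under assumptions not stated in the abstract: we assume each example's logits decompose additively into per-component contributions (one vector per attention head or MLP), and we fit one scalar weight per component with full-batch gradient descent on cross-entropy over the few labeled examples. All array names and hyperparameters here are hypothetical; the synthetic contributions stand in for values one would extract from the model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 10 components, binary task, 24 labeled examples.
n_components, n_classes, n_train = 10, 2, 24

# Synthetic per-component contributions to the logits
# (in practice these would be read out of the model's residual stream).
comps = rng.normal(size=(n_train, n_components, n_classes))
labels = rng.integers(0, n_classes, size=n_train)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Plain ICL corresponds to summing all components with unit weights.
baseline_acc = (comps.sum(axis=1).argmax(-1) == labels).mean()

# Component reweighting: learn one scalar per component so that
# reweighted logits = sum_i w_i * contribution_i fit the labeled examples.
w = np.ones(n_components)  # start from the unit-weight (plain ICL) solution
lr = 0.5
for _ in range(200):
    logits = np.einsum("nic,i->nc", comps, w)
    grad_logits = softmax(logits)
    grad_logits[np.arange(n_train), labels] -= 1.0  # dL/dlogits for cross-entropy
    grad_w = np.einsum("nc,nic->i", grad_logits / n_train, comps)
    w -= lr * grad_w

acc = (np.einsum("nic,i->nc", comps, w).argmax(-1) == labels).mean()
```

The learned weights up- or down-scale each component's contribution before the final sum, which is how a few labeled examples can suppress label-biased or bad-performing components without touching the model's parameters.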