We study last-layer outlier dimensions, i.e., dimensions that display extreme activations for the majority of inputs. We show that outlier dimensions arise in many different modern language models, and trace their function back to the heuristic of constantly predicting frequent words. We further show how a model can block this heuristic when it is not contextually appropriate, by assigning a counterbalancing weight mass to the remaining dimensions, and we investigate which model parameters boost outlier dimensions and when they arise during training. We conclude that outlier dimensions are a specialized mechanism discovered by many distinct models to implement a useful token prediction heuristic.