Although large language models (LLMs) have demonstrated remarkable performance, the lack of transparency in their inference logic raises concerns about their trustworthiness. To better understand LLMs, we conduct a detailed analysis of the operations of attention heads, aiming to shed light on the in-context learning of LLMs. Specifically, we investigate whether attention heads encode two types of relationships between tokens present in natural language: syntactic dependencies parsed from sentences and relations within knowledge graphs. We find that certain attention heads exhibit a pattern where, when attending to head tokens, they recall tail tokens and increase the output logits of those tail tokens. More crucially, the formation of such semantic induction heads correlates closely with the emergence of the in-context learning ability of language models. The study of semantic attention heads advances our understanding of the intricate operations of attention heads in transformers, and further provides new insights into the in-context learning of LLMs.
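To make the described mechanism concrete, below is a minimal PyTorch sketch of the kind of per-head logit attribution one could use to detect such semantic induction heads: take the output a head routes from the head-token position to the current query position, project it through the unembedding matrix, and check whether it boosts the logit of the associated tail token. This is not the paper's actual code; all tensor names, shapes, and positions are hypothetical placeholders, and layer normalization is ignored for simplicity.

```python
import torch

# Hypothetical dimensions for one attention head in one layer.
d_model, d_head, vocab = 512, 64, 32000
seq_len = 16

attn_pattern = torch.softmax(torch.randn(seq_len, seq_len), dim=-1)  # (query, key) attention weights
value_vectors = torch.randn(seq_len, d_head)                         # per-position value projections
W_O = torch.randn(d_head, d_model)                                   # head's output projection
W_U = torch.randn(d_model, vocab)                                    # unembedding matrix

head_pos, tail_token_id = 3, 42   # position of the head token; id of the candidate tail token
query_pos = seq_len - 1           # position whose next-token logits we inspect

# Contribution routed from the head-token position to the query position:
# attention weight * value vector, mapped through the output projection,
# then into logit space via the unembedding matrix.
routed = attn_pattern[query_pos, head_pos] * value_vectors[head_pos] @ W_O
logit_boost = routed @ W_U[:, tail_token_id]

# A semantic induction head would show both high attention to the head token
# and a positive logit contribution to the associated tail token.
print(f"attention to head token: {attn_pattern[query_pos, head_pos]:.3f}")
print(f"logit contribution to tail token: {logit_boost:.3f}")
```

In practice, one would run such an attribution over real model activations for sentence pairs whose head and tail tokens stand in a known syntactic or knowledge-graph relation, and aggregate the scores per head.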