We study how syntactic and semantic information is encoded in inner layer representations of Large Language Models (LLMs), focusing on the very large DeepSeek-V3. We find that, by averaging hidden-representation vectors of sentences sharing syntactic structure or meaning, we obtain vectors that capture a significant proportion of the syntactic and semantic information contained in the representations. In particular, subtracting these syntactic and semantic ``centroids'' from sentence vectors strongly affects their similarity with syntactically and semantically matched sentences, respectively, suggesting that syntax and semantics are, at least partially, linearly encoded. We also find that the cross-layer encoding profiles of syntax and semantics are different, and that the two signals can to some extent be decoupled, suggesting differential encoding of these two types of linguistic information in LLM representations.
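The centroid-subtraction idea above can be sketched in a few lines. The following is a minimal illustration with random vectors standing in for real LLM hidden states; the dimensionality, number of sentences, and noise scale are all illustrative assumptions, not the paper's actual setup.

```python
# Hedged sketch of centroid subtraction: random vectors stand in for
# LLM hidden states; all sizes here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
dim = 64  # assumed hidden size, for illustration only

# Simulate hidden vectors for sentences sharing one syntactic structure:
# a common "syntax" direction plus per-sentence noise.
syntax_direction = rng.normal(size=dim)
sentences = syntax_direction + 0.1 * rng.normal(size=(10, dim))

# Syntactic "centroid": the mean vector over the matched sentences.
centroid = sentences.mean(axis=0)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Similarity between two syntactically matched sentences, before and
# after subtracting the centroid: if the shared structure is linearly
# encoded, removing the centroid should strongly reduce similarity.
before = cosine(sentences[0], sentences[1])
after = cosine(sentences[0] - centroid, sentences[1] - centroid)
print(before, after)
```

In this toy setup the pre-subtraction similarity is high (the shared direction dominates) and the post-subtraction similarity collapses toward zero, mirroring the qualitative effect the abstract describes.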