We study how syntactic and semantic information is encoded in the inner-layer representations of Large Language Models (LLMs), focusing on the very large DeepSeek-V3. We find that, by averaging the hidden-representation vectors of sentences that share a syntactic structure or a meaning, we obtain vectors capturing a significant proportion of the syntactic and semantic information contained in the representations. In particular, subtracting these syntactic and semantic ``centroids'' from sentence vectors strongly affects their similarity to syntactically and semantically matched sentences, respectively, suggesting that syntax and semantics are, at least partially, linearly encoded. We also find that the cross-layer encoding profiles of syntax and semantics differ, and that the two signals can to some extent be decoupled, pointing to differential encoding of these two types of linguistic information in LLM representations.
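The centroid-subtraction procedure described above can be sketched in a few lines. Below is a minimal illustration on synthetic vectors; the array shapes, the random data, and the helper names (`centroid`, `cosine`) are assumptions for illustration, not the paper's actual pipeline or real model hidden states.

```python
import numpy as np

def centroid(vectors: np.ndarray) -> np.ndarray:
    """Mean of hidden-representation vectors for a group of sentences
    sharing a syntactic structure (or a meaning): the group's 'centroid'."""
    return np.mean(vectors, axis=0)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical data standing in for one layer's hidden states:
# 50 sentences assumed to share a syntactic structure, hidden size 768.
rng = np.random.default_rng(0)
group = rng.normal(size=(50, 768))
s_a, s_b = group[0], group[1]     # a syntactically matched sentence pair

c_syn = centroid(group)           # the syntactic "centroid" of the group

# Subtracting the centroid removes the (linearly encoded) shared component,
# which should change the similarity between matched sentences.
sim_before = cosine(s_a, s_b)
sim_after = cosine(s_a - c_syn, s_b - c_syn)
print(f"similarity before: {sim_before:.3f}, after: {sim_after:.3f}")
```

The same sketch applies to semantic centroids by grouping sentences that share a meaning instead of a syntactic structure.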