We study how syntactic and semantic information is encoded in the inner-layer representations of Large Language Models (LLMs), focusing on the very large DeepSeek-V3. We find that averaging the hidden-representation vectors of sentences sharing syntactic structure or meaning yields vectors that capture a significant proportion of the syntactic and semantic information contained in the representations. In particular, subtracting these syntactic and semantic ``centroids'' from sentence vectors strongly affects their similarity with syntactically and semantically matched sentences, respectively, suggesting that syntax and semantics are, at least partially, linearly encoded. We also find that the cross-layer encoding profiles of syntax and semantics differ, and that the two signals can to some extent be decoupled, suggesting differential encoding of these two types of linguistic information in LLM representations.
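A minimal sketch of the centroid-subtraction idea described above, assuming layer-wise hidden vectors are available as NumPy arrays; the variable names (syntax_group, v_a, v_b) and the random placeholder data are illustrative assumptions, not the paper's actual pipeline or data.

```python
import numpy as np

def centroid(vectors):
    """Average a set of hidden-representation vectors (one per sentence)."""
    return np.mean(np.stack(vectors), axis=0)

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical layer-L hidden vectors for sentences sharing a syntactic
# structure (syntax_group), and for a syntactically matched sentence pair
# (v_a, v_b). Random data stands in for real model representations.
rng = np.random.default_rng(0)
syntax_group = [rng.standard_normal(4096) for _ in range(100)]
v_a, v_b = rng.standard_normal(4096), rng.standard_normal(4096)

c_syn = centroid(syntax_group)                  # syntactic "centroid"
sim_before = cosine(v_a, v_b)                   # similarity of matched pair
sim_after = cosine(v_a - c_syn, v_b - c_syn)    # after removing the centroid
print(sim_before, sim_after)
```

The same procedure applies to semantic centroids (averaging over sentences sharing meaning); comparing how subtraction of each centroid shifts the similarity of syntactically versus semantically matched pairs, layer by layer, is what distinguishes the two encoding profiles.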