We investigate the degree to which existing LLMs encode abstract linguistic information in Italian in a multi-task setting. We exploit large-scale curated synthetic data -- several Blackbird Language Matrices (BLM) problems in Italian -- and use them to study how sentence representations built with pre-trained language models encode specific syntactic and semantic information. We use a two-level architecture that separately models the compression of sentence embeddings into a representation retaining task-relevant information, and a BLM task. We then investigate whether we can obtain compressed sentence representations that encode the syntactic and semantic information relevant to several BLM tasks. While we expected that sentence structure -- in terms of sequences of phrases/chunks -- and chunk properties could be shared across tasks, performance and error analyses show that the clues for the different tasks are encoded differently in the sentence embeddings, suggesting that abstract linguistic notions such as constituents or thematic roles do not seem to be present in the pretrained sentence embeddings.
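The two-level setup described above can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not the authors' code: it assumes a linear compressor (level 1) and a BLM-style multiple-choice task (level 2) in which a context of sentence embeddings is summarized and the candidate answer most similar to that summary is selected. The names `compress` and `solve_blm`, the compression dimension, and the cosine-similarity scoring are all illustrative assumptions.

```python
# Hypothetical two-level sketch (NOT the paper's model):
# level 1 compresses sentence embeddings, level 2 solves a
# BLM-style task by scoring candidate answers against the context.
import numpy as np

rng = np.random.default_rng(0)

def compress(x, W):
    """Level 1 (assumed): project a sentence embedding to a smaller
    task representation via a linear map and a tanh nonlinearity."""
    return np.tanh(W @ x)

def solve_blm(context, candidates, W):
    """Level 2 (assumed): summarize the compressed context by its mean,
    then return the index of the candidate whose compressed form is
    most similar to the summary (cosine similarity)."""
    ctx = np.mean([compress(s, W) for s in context], axis=0)
    scores = []
    for c in candidates:
        z = compress(c, W)
        scores.append(ctx @ z / (np.linalg.norm(ctx) * np.linalg.norm(z)))
    return int(np.argmax(scores))

# Toy data: a context of 7 sentence embeddings (dim 16), 4 candidate
# answers, compressed to dim 4. Real BLM instances use embeddings from
# a pre-trained language model instead of random vectors.
W = rng.normal(size=(4, 16))
context = [rng.normal(size=16) for _ in range(7)]
candidates = [rng.normal(size=16) for _ in range(4)]
print(solve_blm(context, candidates, W))
```

In the actual experiments, the compressor would be trained jointly with the task objective, so that the compressed representation retains the syntactic/semantic clues each BLM task needs; the sketch only fixes a random projection to show the data flow.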