In this paper, our goal is to investigate to what degree multilingual pretrained language models capture cross-linguistically valid abstract linguistic representations. We take the approach of developing large-scale curated synthetic data with specific properties, and using it to study sentence representations built using pretrained language models. We use a new multiple-choice task and datasets, Blackbird Language Matrices (BLMs), to focus on a specific grammatical structural phenomenon -- subject-verb agreement across a variety of sentence structures -- in several languages. Solving this task requires a system to detect complex linguistic patterns and paradigms in text representations. Using a two-level architecture that solves the problem in two steps -- detect syntactic objects and their properties in individual sentences, and find patterns across an input sequence of sentences -- we show that despite having been trained on multilingual texts in a consistent manner, multilingual pretrained language models exhibit language-specific differences, and that syntactic structure is not shared, even across closely related languages.
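To make the two-level architecture concrete, here is a minimal sketch: level one embeds each sentence of a BLM instance with a pretrained multilingual encoder, and level two summarizes the sequence of context embeddings and scores the candidate answers. The encoder choice, mean pooling, GRU summarizer, and dot-product scoring are illustrative assumptions, not the paper's exact model.

```python
# A hedged sketch of a two-level model for a BLM-style multiple-choice task.
# Level 1: per-sentence embeddings from a multilingual encoder (assumed: mBERT).
# Level 2: a hypothetical sequence model that predicts a continuation vector
# from the context sentences and scores each candidate answer against it.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

ENCODER = "bert-base-multilingual-cased"  # assumed encoder choice

tokenizer = AutoTokenizer.from_pretrained(ENCODER)
encoder = AutoModel.from_pretrained(ENCODER)
encoder.eval()

@torch.no_grad()
def embed(sentences):
    """Level 1: mean-pooled token embeddings, one vector per sentence."""
    batch = tokenizer(sentences, padding=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state        # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)       # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)        # (B, H)

class SequenceScorer(nn.Module):
    """Level 2 (hypothetical): summarize the context sequence with a GRU
    and score candidates by dot product with the predicted continuation."""
    def __init__(self, hidden=768):
        super().__init__()
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, hidden)

    def forward(self, context, candidates):
        # context: (1, S, H) embeddings of the context sentences
        # candidates: (C, H) one embedding per answer option
        _, last = self.rnn(context)                    # (1, 1, H)
        predicted = self.proj(last.squeeze(0))         # (1, H)
        return candidates @ predicted.squeeze(0)       # (C,) scores

# Illustrative usage on a toy agreement pattern (scorer is untrained here).
ctx = embed(["The boy eats.", "The boys eat.", "The girl eats."])
opts = embed(["The girls eat.", "The girls eats."])
scores = SequenceScorer()(ctx.unsqueeze(0), opts)
print(scores.argmax().item())  # index of the best-scoring candidate
```

In this sketch the level-two scorer is randomly initialized; in practice it would be trained on BLM sequences so that the predicted continuation vector reflects the underlying agreement paradigm rather than surface similarity between sentences.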