A syntactic language model (SLM) incrementally generates a sentence together with its syntactic tree in a left-to-right manner. We present Generative Pretrained Structured Transformers (GPST), an unsupervised SLM at scale capable of being pre-trained from scratch on raw texts with high parallelism. GPST circumvents the limitations of previous SLMs, such as reliance on gold trees and sequential training. It consists of two components: a standard SLM supervised by a uni-directional language modeling loss, and an additional composition model, supervised by a bi-directional language modeling loss, which induces syntactic parse trees and computes constituent representations. We propose a representation surrogate to enable joint parallel training of the two models in a hard-EM fashion. We pre-train GPST on OpenWebText, a corpus of $9$ billion tokens, and demonstrate the superiority of GPST over GPT-2 of comparable size on numerous tasks covering both language understanding and language generation. GPST also significantly outperforms existing unsupervised SLMs on left-to-right grammar induction, while achieving a substantial acceleration in training.
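To make the first sentence concrete, the following is a minimal, hypothetical illustration of how an SLM interleaves token generation with tree construction left-to-right: a GEN action emits a word onto a stack, and a REDUCE action merges the top two stack elements into a constituent. The transition system and the example sentence are illustrative assumptions, not GPST's actual interface.

```python
# Hypothetical sketch of left-to-right syntactic generation: the action
# sequence jointly determines the sentence and its binary parse tree.

def run_slm_actions(actions):
    """Replay (action, word) pairs left-to-right, returning the
    bracketed parse tree built incrementally on a stack."""
    stack = []
    for action, word in actions:
        if action == "GEN":        # generate the next token onto the stack
            stack.append(word)
        elif action == "REDUCE":   # compose the top two items into a constituent
            right = stack.pop()
            left = stack.pop()
            stack.append(f"({left} {right})")
    assert len(stack) == 1, "actions must yield a single root constituent"
    return stack[0]

# "the cat sleeps" generated together with the tree ((the cat) sleeps)
actions = [
    ("GEN", "the"),
    ("GEN", "cat"),
    ("REDUCE", None),
    ("GEN", "sleeps"),
    ("REDUCE", None),
]
print(run_slm_actions(actions))  # → ((the cat) sleeps)
```

In GPST, the composition model would additionally compute a representation for each constituent formed at a REDUCE step, which the generative component reuses via the representation surrogate.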