Language Models (LMs) struggle with linguistic understanding at the discourse level, even though discourse patterns such as coherence, cohesion, and narrative flow are prevalent in their pre-training data. To improve the discourse capabilities of LMs already at the pre-training stage, we introduce DEPTH, an encoder-decoder model that learns latent representations for sentences using a discourse-oriented pre-training objective. DEPTH combines hierarchical sentence representations with two objectives: (1) Sentence Un-Shuffling, and (2) Span Corruption. Our approach trains the model to represent both sub-word-level and sentence-level dependencies over the pre-training corpus. When trained either from scratch or continuing from a pre-trained T5 checkpoint, DEPTH learns semantic and discourse-level representations faster than T5, achieving lower span-corruption loss despite the additional sentence-un-shuffling objective. Evaluations on the GLUE, DiscoEval, and NI benchmarks demonstrate DEPTH's ability to quickly learn diverse downstream tasks that require syntactic, semantic, and discourse capabilities. Our approach extends the discourse capabilities of T5 while minimally impacting other natural language understanding (NLU) capabilities in the resulting LM. We share our codebase for reproducibility: https://github.com/zbambergerNLP/depth.git.