Language model pre-training benefits from a broad data mixture, improving performance across domains and languages. However, training on such heterogeneous text corpora is complex, requiring extensive and costly effort. Because these data sources differ lexically, syntactically, and semantically, they cause negative interference, also known as the "curse of multilinguality". We propose a novel pre-training framework to alleviate this curse. Our method, DEPT, decouples the embedding layers from the transformer body while simultaneously training the latter in multiple contexts. DEPT enables the model to train without being bound to a shared global vocabulary. DEPT: (1) trains robustly and effectively under significant data heterogeneity, (2) reduces the parameter count of the token embeddings by up to 80% and the communication costs by 675x for billion-scale models, (3) enhances model generalization and plasticity in adapting to new languages and domains, and (4) allows training with a custom, optimized vocabulary per data source. We demonstrate DEPT's potential by performing the first vocabulary-agnostic federated multilingual pre-training of a 1.3 billion-parameter model across high- and low-resource languages, reducing its parameter count by 409 million.
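The core idea described above, embedding layers decoupled from a shared transformer body, can be illustrated with a minimal sketch. The following is not the authors' implementation; all class names, dimensions, and vocabulary sizes are illustrative assumptions, showing one embedding table and output head per data source while the transformer body is shared.

```python
# Minimal sketch (assumed names/sizes): per-source embeddings decoupled
# from a shared transformer body, as described in the abstract.
import torch
import torch.nn as nn


class DecoupledLM(nn.Module):
    def __init__(self, vocab_sizes, d_model=512, n_layers=4, n_heads=8):
        super().__init__()
        # One embedding table per data source, each with its own
        # (possibly custom-optimized) vocabulary.
        self.embeddings = nn.ModuleList(
            nn.Embedding(v, d_model) for v in vocab_sizes
        )
        # The transformer body is shared across all data sources.
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.body = nn.TransformerEncoder(layer, num_layers=n_layers)
        # One output head per data source, matching its vocabulary.
        self.heads = nn.ModuleList(
            nn.Linear(d_model, v) for v in vocab_sizes
        )

    def forward(self, token_ids, source_id):
        # Each batch is tokenized with its source's own vocabulary,
        # so only that source's embedding table and head are used.
        h = self.embeddings[source_id](token_ids)
        h = self.body(h)
        return self.heads[source_id](h)


# Usage: two data sources with different vocabulary sizes share one body.
model = DecoupledLM(vocab_sizes=[32_000, 8_000])
logits = model(torch.randint(0, 8_000, (2, 16)), source_id=1)
print(logits.shape)  # torch.Size([2, 16, 8000])
```

In a federated setting, only the shared body's parameters would need to be communicated and aggregated across participants, while each participant keeps its own embedding table locally, which is consistent with the reported reductions in embedding parameters and communication cost.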