We present a simple way to merge masked language modeling with causal language modeling. This hybrid training objective results in a model that combines the strengths of both modeling paradigms within a single transformer stack: GPT-BERT can be transparently used like any standard causal or masked language model. We test the pretraining process that enables this flexible behavior on the BabyLM Challenge 2024. The results show that the hybrid pretraining outperforms masked-only or causal-only models. We openly release the models, training corpora and code.
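As a rough illustration of what a hybrid objective of this kind can look like, the sketch below trains a single shared transformer by sampling, per step, either a causal next-token loss or a masked-token loss. The toy model, masking ratio, mixing probability, and all names (TinyLM, hybrid_step, MASK_ID) are illustrative assumptions; this is a minimal sketch, not GPT-BERT's actual formulation or released code.

```python
# Minimal sketch: one transformer stack trained with a mixture of causal and
# masked objectives. All hyperparameters and identifiers are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE, MASK_ID = 1000, 3

class TinyLM(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, VOCAB_SIZE)

    def forward(self, ids, causal: bool):
        x = self.embed(ids)
        attn_mask = None
        if causal:  # causal batches attend only to earlier positions
            attn_mask = nn.Transformer.generate_square_subsequent_mask(ids.size(1))
        h = self.encoder(x, mask=attn_mask)
        return self.lm_head(h)

def hybrid_step(model, ids, p_causal=0.5, mask_prob=0.15):
    """One training step that samples either the causal or the masked objective."""
    if torch.rand(()) < p_causal:
        # Causal LM: predict token t+1 from tokens up to t.
        logits = model(ids[:, :-1], causal=True)
        return F.cross_entropy(logits.reshape(-1, VOCAB_SIZE), ids[:, 1:].reshape(-1))
    # Masked LM: replace a random subset with [MASK] and predict the originals.
    corrupted = ids.clone()
    is_masked = torch.rand_like(ids, dtype=torch.float) < mask_prob
    corrupted[is_masked] = MASK_ID
    logits = model(corrupted, causal=False)
    targets = ids.masked_fill(~is_masked, -100)  # loss only on masked positions
    return F.cross_entropy(logits.reshape(-1, VOCAB_SIZE), targets.reshape(-1),
                           ignore_index=-100)

model = TinyLM()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
batch = torch.randint(4, VOCAB_SIZE, (8, 32))  # toy token ids
loss = hybrid_step(model, batch)
loss.backward()
opt.step()
```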