Large, high-quality datasets are crucial for training Large Language Models (LLMs). However, few datasets are available for specialized critical domains such as law, and the available ones are often limited to English. We curate and release MultiLegalPile, a 689GB corpus in 24 languages from 17 jurisdictions. The MultiLegalPile corpus, which includes diverse legal data sources with varying licenses, allows for pretraining NLP models under fair use, with more permissive licenses for the Eurlex Resources and Legal mC4 subsets. We pretrain two RoBERTa models and one Longformer multilingually, as well as 24 monolingual models, one on each of the language-specific subsets, and evaluate them on LEXTREME. Additionally, we evaluate the English and multilingual models on LexGLUE. Our multilingual models set a new SotA on LEXTREME and our English models on LexGLUE. We release the dataset, the trained models, and all code under the most open possible licenses.
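As a minimal sketch of how the released corpus could be consumed, the snippet below streams one language-specific subset via the Hugging Face `datasets` library. The repository id, config name, and field name are assumptions for illustration, not confirmed by the abstract; consult the released dataset card for the exact identifiers.

```python
# Minimal sketch (assumptions, not the authors' code): streaming a
# MultiLegalPile subset with the Hugging Face `datasets` library.
from datasets import load_dataset

# Stream one language-specific subset to avoid downloading the full
# 689GB corpus. Repo id and config name below are hypothetical.
dataset = load_dataset(
    "joelniklaus/Multi_Legal_Pile",  # assumed repo id; verify on the Hub
    "de_legislation",                # assumed config: a German subset
    split="train",
    streaming=True,
)

# Inspect the first document; the "text" field name is an assumption.
first_doc = next(iter(dataset))
print(first_doc["text"][:500])
```

Streaming is used here because loading a multi-hundred-gigabyte corpus eagerly is rarely practical on a single machine.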