Encoder-only transformer models such as BERT offer an excellent performance-size tradeoff for retrieval and classification tasks relative to larger decoder-only models. Despite encoders being the workhorse of numerous production pipelines, there have been limited Pareto improvements to BERT since its release. In this paper, we introduce ModernBERT, bringing modern model optimizations to encoder-only models and representing a major Pareto improvement over older encoders. Trained on 2 trillion tokens with a native 8192 sequence length, ModernBERT models achieve state-of-the-art results on a large pool of evaluations spanning diverse classification tasks and both single- and multi-vector retrieval across different domains (including code). In addition to strong downstream performance, ModernBERT is also the fastest and most memory-efficient encoder, and is designed for inference on common GPUs.