Transformer-based text embedding models have improved their performance on benchmarks like MIRACL and BEIR by increasing their parameter counts. However, this scaling approach introduces significant deployment challenges, including increased inference latency and memory usage. These challenges are particularly severe in retrieval-augmented generation (RAG) applications, where large models' increased memory requirements constrain dataset ingestion capacity, and their higher latency directly impacts query-time performance. While causal language models have addressed similar efficiency challenges using Mixture of Experts (MoE) architectures, this approach has not been successfully adapted to the general text embedding setting. In this paper, we introduce Nomic Embed v2, the first general-purpose MoE text embedding model. Our model outperforms models in the same parameter class on both monolingual and multilingual benchmarks while maintaining competitive performance with models twice its size. We open-source all code, models, and evaluation data to ensure full reproducibility of our training pipeline.
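To make the efficiency argument concrete, the sketch below shows the core idea of a Mixture-of-Experts feed-forward layer: a router sends each token to only a few expert MLPs, so the active parameter count per token is a fraction of the total. This is a minimal illustrative sketch of the general MoE technique, not Nomic Embed v2's actual architecture; the toy dimensions, expert count, and top-k routing scheme are all assumptions for illustration.

```python
import numpy as np

# Minimal sketch of an MoE feed-forward layer, used in place of the dense FFN
# in a transformer block. All sizes and the top-2 routing are illustrative
# assumptions, not the paper's configuration.
rng = np.random.default_rng(0)

d_model, d_ff = 8, 16      # token hidden size and expert inner size (toy)
n_experts, top_k = 4, 2    # route each token to its top-2 of 4 experts

# Each expert is a two-layer MLP; the router is a linear map to expert logits.
W1 = rng.standard_normal((n_experts, d_model, d_ff)) * 0.1
W2 = rng.standard_normal((n_experts, d_ff, d_model)) * 0.1
Wg = rng.standard_normal((d_model, n_experts)) * 0.1

def moe_ffn(x):
    """x: (n_tokens, d_model) -> (n_tokens, d_model)."""
    logits = x @ Wg                                   # (n_tokens, n_experts)
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)             # softmax over experts
    topk = np.argsort(-probs, axis=-1)[:, :top_k]     # chosen experts per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        w = probs[t, topk[t]]
        w = w / w.sum()                               # renormalize over top-k
        for weight, e in zip(w, topk[t]):
            h = np.maximum(x[t] @ W1[e], 0.0)         # expert MLP with ReLU
            out[t] += weight * (h @ W2[e])
    return out

tokens = rng.standard_normal((3, d_model))
y = moe_ffn(tokens)
print(y.shape)  # same shape as the input; only top_k experts ran per token
```

Because only `top_k` of the `n_experts` expert MLPs execute per token, total capacity grows with the number of experts while per-token compute stays roughly constant, which is the efficiency property the abstract appeals to.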