Transformer-based text embedding models have improved their performance on benchmarks like MIRACL and BEIR by increasing their parameter counts. However, this scaling approach introduces significant deployment challenges, including increased inference latency and memory usage. These challenges are particularly severe in retrieval-augmented generation (RAG) applications, where the larger memory requirements of large models constrain dataset ingestion capacity and their higher latency directly degrades query-time performance. While causal language models have addressed similar efficiency challenges using Mixture of Experts (MoE) architectures, this approach has not been successfully adapted to the general text embedding setting. In this paper, we introduce Nomic Embed v2, the first general-purpose MoE text embedding model. Our model outperforms models in the same parameter class on both monolingual and multilingual benchmarks while remaining competitive with models twice its size. We open-source all code, models, and evaluation data to ensure full reproducibility of our training pipeline at \href{https://github.com/nomic-ai/contrastors}{https://github.com/nomic-ai/contrastors}.