Recent approaches to improving the extraction of text embeddings from autoregressive large language models (LLMs) have largely focused on improving data, the pretrained backbone language model, or task differentiation via instructions. In this work, we address an architectural limitation of autoregressive models: token embeddings cannot contain information from tokens that appear later in the input. To address this limitation, we propose a simple approach, "echo embeddings," in which we repeat the input twice in context and extract embeddings from the second occurrence. We show that echo embeddings of early tokens can encode information about later tokens, allowing us to maximally leverage high-quality LLMs for embeddings. On the MTEB leaderboard, echo embeddings improve over classical embeddings by over 9% zero-shot and by around 0.7% when fine-tuned. Echo embeddings with a Mistral-7B model achieve state-of-the-art performance among prior open-source models that do not leverage synthetic fine-tuning data.