Large Language Models (LLMs) are foundational in language technologies, particularly in information retrieval (IR). Previous studies have utilized LLMs for query expansion, achieving notable improvements in IR. In this paper, we thoroughly explore best practices for leveraging LLMs for query expansion. To this end, we introduce a training-free, straightforward yet effective framework called Multi-Text Generation Integration (\textsc{MuGI}). It leverages LLMs to generate multiple pseudo-references and integrates them with queries to enhance both sparse and dense retrievers. Our empirical findings reveal that: (1) increasing the number of samples from LLMs benefits IR systems; (2) a balance between the query and pseudo-documents, together with an effective integration strategy, is critical for high performance; (3) contextual information from LLMs is essential, even boosting a 23M model to outperform a 7B baseline; (4) pseudo-relevance feedback can further calibrate queries for improved performance; and (5) query expansion is widely applicable and versatile, consistently enhancing models ranging from 23M to 7B parameters. Our code and all generated references are made available at \url{https://github.com/lezhang7/Retrieval_MuGI}
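The integration step can be illustrated with a minimal sketch for a sparse retriever such as BM25: the original query is repeated several times so its term weight is balanced against the concatenated pseudo-references. The function name `expand_query` and the repetition factor `beta` are illustrative assumptions, not identifiers from the paper.

```python
def expand_query(query: str, pseudo_docs: list[str], beta: int = 3) -> str:
    """Build an expanded query for a sparse retriever (e.g. BM25).

    The original query is repeated `beta` times to balance its term
    weight against the (typically much longer) LLM-generated
    pseudo-references, which are appended afterwards. This mirrors the
    query/pseudo-document balancing the abstract describes; the exact
    weighting scheme is a hedged assumption here.
    """
    return " ".join([query] * beta + pseudo_docs)


# Usage: two hypothetical LLM-generated pseudo-references for one query.
expanded = expand_query(
    "what causes tides",
    [
        "Tides are caused by the gravitational pull of the moon on the ocean.",
        "Ocean tides result from the combined gravity of the moon and the sun.",
    ],
    beta=2,
)
print(expanded)
```

The expanded string is then issued to the sparse retriever in place of the raw query; for dense retrievers, the analogous step would embed the query and pseudo-references and combine the vectors rather than concatenating text.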