In Retrieval-Augmented Generation (RAG) systems, advanced Large Language Models (LLMs) have emerged as effective unsupervised Query Likelihood Models (QLMs), re-ranking documents by the probability of generating the query given a document's content. However, directly prompting LLMs to approximate QLMs is inherently biased: the estimated distribution may diverge from the actual document-specific distribution. In this study, we introduce a novel framework, $\mathrm{UR^3}$, which leverages Bayesian decision theory to both quantify and mitigate this estimation bias. Specifically, $\mathrm{UR^3}$ reformulates the problem as maximizing the probability of document generation, thereby harmonizing the optimization of query and document generation probabilities under a unified risk minimization objective. Our empirical results indicate that $\mathrm{UR^3}$ significantly enhances re-ranking, particularly Top-1 accuracy, and benefits QA tasks by achieving higher accuracy with fewer input documents.
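The query-likelihood re-ranking setup described above can be sketched with a toy smoothed unigram document language model standing in for the LLM-based estimate; the tokenization, smoothing constant, and function names here are illustrative assumptions, not the paper's method:

```python
import math
from collections import Counter

def qlm_score(query, document, alpha=0.5):
    """Approximate log P(query | document) under a smoothed unigram
    document language model (a stand-in for the LLM-based QLM)."""
    doc_tokens = document.lower().split()
    counts = Counter(doc_tokens)
    total = len(doc_tokens)
    vocab = len(counts) + 1
    score = 0.0
    for tok in query.lower().split():
        # additive smoothing so unseen query tokens do not zero the score
        p = (counts[tok] + alpha) / (total + alpha * vocab)
        score += math.log(p)
    return score

def rerank(query, documents):
    """Sort candidate documents by descending query likelihood."""
    return sorted(documents, key=lambda d: qlm_score(query, d), reverse=True)

docs = [
    "the cat sat on the mat",
    "query likelihood models rank documents by generation probability",
]
print(rerank("query likelihood ranking", docs)[0])
```

An actual QLM re-ranker would replace `qlm_score` with the conditional token log-probabilities of the query under an LLM prompted with the document; the ranking logic is unchanged.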