Test collections are information retrieval tools that allow researchers to quickly and easily evaluate ranking algorithms. While test collections have become an integral part of IR research, creating one requires significant manual annotation effort, which often makes it expensive and time-consuming. As a result, test collections can become too small when the budget is limited, which may lead to unstable evaluations. As an alternative, recent studies have proposed using large language models (LLMs) to completely replace human assessors. However, while LLM judgments correlate with human judgments to some extent, they are imperfect and often biased. Moreover, even if an LLM or prompt performs well on one dataset, there is no guarantee that it will perform similarly in practice, owing to differences in tasks and data. Completely replacing human assessors with LLMs is therefore considered too risky and not fully trustworthy. In this paper, we propose \textbf{L}LM-\textbf{A}ssisted \textbf{R}elevance \textbf{A}ssessments (\textbf{LARA}), an effective method for balancing manual annotations with LLM annotations, which helps build a rich and reliable test collection. We use the LLM's predicted relevance probabilities to select the documents that are most profitable to annotate manually under a budget constraint. While relying solely on the LLM's predicted probabilities to guide manual annotation already performs fairly well, LARA, supported by theoretical reasoning, guides the human annotation process even more effectively via online calibration learning. Then, using the calibration model learned from the limited manual annotations, LARA debiases the LLM predictions to annotate the remaining unassessed documents. Empirical evaluations on the TREC-COVID and TREC-8 Ad Hoc datasets show that LARA outperforms alternative solutions under almost any budget constraint.
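To make the described pipeline concrete, below is a minimal, illustrative sketch of the three steps the abstract outlines: selecting documents for manual annotation from the LLM's predicted relevance probabilities, fitting a calibration model on the collected human labels, and debiasing the remaining LLM predictions. All names (`llm_probs`, `budget`), the uncertainty-based selection rule, and the logistic-regression calibrator are our own assumptions for illustration; the paper's actual selection criterion and online calibration procedure may differ.

```python
# Hypothetical sketch of an LLM-assisted relevance assessment pipeline.
# This is NOT the paper's exact algorithm; selection rule and calibrator
# are simplifying assumptions chosen for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Assume llm_probs[i] is the LLM's predicted relevance probability for
# document i, and `budget` is how many manual annotations we can afford.
llm_probs = rng.uniform(size=1000)   # stand-in for real LLM outputs
budget = 100

# Step 1: select the documents whose LLM probability is closest to the
# decision boundary (0.5), i.e. where a human label is most informative.
uncertainty = np.abs(llm_probs - 0.5)
to_annotate = np.argsort(uncertainty)[:budget]

# Step 2: collect human labels for the selected documents (simulated here)
# and fit a calibration model mapping LLM probability -> true relevance.
human_labels = (rng.uniform(size=budget) < llm_probs[to_annotate]).astype(int)
calibrator = LogisticRegression()
calibrator.fit(llm_probs[to_annotate].reshape(-1, 1), human_labels)

# Step 3: debias the LLM predictions for the remaining, unannotated
# documents using the learned calibrator, then threshold to get labels.
rest = np.setdiff1d(np.arange(len(llm_probs)), to_annotate)
calibrated = calibrator.predict_proba(llm_probs[rest].reshape(-1, 1))[:, 1]
final_labels = (calibrated >= 0.5).astype(int)
```

In this sketch, the human budget is spent where the LLM is least certain, and the calibrator then corrects systematic over- or under-confidence in the LLM's probabilities before they are used to label the rest of the pool.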