Test collections are information retrieval resources that enable researchers to evaluate ranking algorithms quickly and easily. While test collections have become integral to IR research, creating them requires substantial manual annotation effort, which makes the process expensive and time-consuming. When the budget is limited, the resulting test collections may be too small, leading to unstable evaluations. As a cheaper alternative, recent studies have proposed using large language models (LLMs) to completely replace human assessors. However, although LLM judgments correlate with human judgments to some extent, they are imperfect and often biased, so fully replacing human assessors is considered too risky and insufficiently reliable. In this paper, we therefore propose LLM-Assisted Relevance Assessments (LARA), a method that balances manual annotations with LLM annotations to build a rich and reliable test collection even under a low budget. We use the LLM's predicted relevance probabilities to select the most profitable documents to annotate manually under a budget constraint. Guided by theoretical reasoning, LARA directs the human annotation process by actively learning to calibrate the LLM's predicted relevance probabilities. Then, using the calibration model learned from the limited manual annotations, LARA debiases the LLM predictions to annotate the remaining unassessed documents. Empirical evaluations on the TREC-7 Ad Hoc, TREC-8 Ad Hoc, TREC Robust 2004, and TREC-COVID datasets show that LARA outperforms alternative solutions under almost any budget constraint.
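To make the two-stage pipeline concrete, the following is a minimal sketch of the LARA loop, assuming binary relevance, a Platt-style logistic regression over the LLM probability as the calibration model, and an uncertainty-based selection rule (querying documents whose calibrated probability is nearest 0.5) as one plausible reading of "most profitable". The names `llm_probs`, `ask_human`, and `batch` are hypothetical, and the paper's exact selection criterion and calibration model may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def lara_sketch(llm_probs, ask_human, budget, batch=10):
    """Label every document in the pool using at most `budget` manual
    judgments (assumes budget >= 2).

    llm_probs : (n,) array of LLM-predicted relevance probabilities.
    ask_human : hypothetical oracle, doc index -> 0/1 relevance judgment.
    """
    n = len(llm_probs)
    feats = llm_probs.reshape(-1, 1)            # calibration input feature
    labeled_idx, labels = [], []
    pool = set(range(n))
    calib = LogisticRegression()

    # Seed with the two most extreme LLM predictions so that both
    # relevance classes are likely to appear among the first labels.
    for i in (int(np.argmin(llm_probs)), int(np.argmax(llm_probs))):
        labeled_idx.append(i); labels.append(ask_human(i)); pool.discard(i)

    while len(labeled_idx) < budget and pool:
        cand = np.array(sorted(pool))
        if len(set(labels)) >= 2:
            # Fit the calibration model on the human labels collected so far.
            calib.fit(feats[labeled_idx], labels)
            p = calib.predict_proba(feats[cand])[:, 1]
        else:
            p = llm_probs[cand]                 # cannot fit yet; use raw probs
        # Active step: query the documents whose calibrated probability is
        # closest to 0.5, i.e. where the calibration model is least certain.
        k = min(batch, budget - len(labeled_idx))
        for i in cand[np.argsort(np.abs(p - 0.5))[:k]]:
            labeled_idx.append(int(i))
            labels.append(ask_human(int(i)))
            pool.discard(int(i))

    # Final pass: calibrated (debiased) LLM probabilities label the rest.
    out = np.empty(n, dtype=int)
    out[labeled_idx] = labels
    rest = np.array(sorted(pool))
    if len(rest):
        if len(set(labels)) >= 2:
            calib.fit(feats[labeled_idx], labels)
            p_rest = calib.predict_proba(feats[rest])[:, 1]
        else:
            p_rest = llm_probs[rest]            # degenerate fallback
        out[rest] = (p_rest >= 0.5).astype(int)
    return out
```

In use, `ask_human` would wrap the actual assessment interface, and the returned labels would populate the qrels of the new test collection, with the manually judged documents carrying human labels and the rest carrying calibrated LLM labels.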