Using large language models (LLMs) to annotate relevance is an increasingly important technique in the information retrieval community. While some studies demonstrate that LLMs can achieve high agreement with ground-truth (human) judgments, others argue the opposite. To the best of our knowledge, these studies have primarily focused on classic ad-hoc text search scenarios. In this paper, we analyze the agreement between LLMs and human experts and explore the impact that disagreement has on system rankings. In contrast to prior studies, we focus on a collection composed of audio files transcribed into two-minute segments: the TREC 2020 and 2021 Podcast Tracks. We employ five different LLMs to re-assess all of the query-segment pairs that were originally annotated by TREC assessors. Furthermore, we re-assess a small subset of pairs on which the LLMs and TREC assessors disagree most strongly, and find that human experts tend to agree with the LLMs more than with the TREC assessors. Our results reinforce Sormunen's (2002) insight that relying on a single assessor leads to lower agreement.