Making the relevance judgments for a TREC-style test collection can be complex and expensive. A typical TREC track involves a team of six contractors working for 2-4 weeks. Those contractors need to be trained and monitored, and software has to be written to support recording relevance judgments correctly and efficiently. The recent advent of large language models that produce astoundingly human-like flowing text in response to a natural language prompt has inspired IR researchers to wonder how those models might be used in the relevance judgment collection process. At the ACM SIGIR 2024 conference, a workshop ``LLM4Eval'' provided a venue for this work, and featured a data challenge activity where participants reproduced TREC deep learning track judgments, as was done by Thomas et al. (arXiv:2408.08896, arXiv:2309.10621). I was asked to give a keynote at the workshop, and this paper presents that keynote in article form. The bottom-line-up-front message is: don't use LLMs to create relevance judgments for TREC-style evaluations.