Information retrieval systems have long been evaluated using the Cranfield paradigm, which enables systematic, fair, and reproducible comparisons of retrieval methods in fixed experimental environments. However, real-world retrieval systems must cope with dynamic environments and temporal changes that affect the document collection, topical trends, and individual users' perceptions of what is relevant. Nevertheless, the temporal dimension of IR evaluation remains understudied. To this end, this work investigates how the temporal generalizability of effectiveness evaluations can be assessed. As a conceptual model, we generalize Cranfield-type experiments to the temporal context by classifying changes in their essential components according to the create, update, and delete operations of persistent storage known from CRUD. From these types of change, we derive different evaluation scenarios and outline their implications. Based on these scenarios, we test renowned state-of-the-art retrieval systems and investigate how retrieval effectiveness changes at different levels of granularity. We show that the proposed measures are well suited to describing the changes in the retrieval results. The experiments conducted confirm that retrieval effectiveness strongly depends on the evaluation scenario under investigation. We find that not only the average retrieval performance of single systems but also the relative system performance is strongly affected by which components change and to what extent they changed.