Information retrieval (IR) evaluation often suffers from fragmented practices, such as varying dataset subsets, aggregation methods, and pipeline configurations, that undermine reproducibility and comparability, especially for foundation embedding models, which require robust out-of-domain performance. We introduce SuiteEval, a unified framework that offers automatic end-to-end evaluation, dynamic indexing that reuses on-disk indices to minimise disk usage, and built-in support for major benchmarks (BEIR, LoTTE, MS MARCO, NanoBEIR, and BRIGHT). Users need only supply a pipeline generator; SuiteEval handles data loading, indexing, ranking, metric computation, and result aggregation, and new benchmark suites can be added in a single line. By reducing boilerplate and standardising evaluation, SuiteEval facilitates reproducible IR research at a time when evaluation over a broader set of benchmarks is increasingly expected.
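To make the intended workflow concrete, the sketch below shows what a SuiteEval run might look like, assuming a Python interface. The package name suiteeval, the DenseRetriever class, the evaluate entry point, and the pipeline-generator signature are illustrative assumptions rather than SuiteEval's documented API; only the workflow itself (supply a pipeline generator and receive an end-to-end evaluation over a named suite) comes from the description above.

    # A minimal sketch of the workflow described above.
    # NOTE: `suiteeval`, `DenseRetriever`, `evaluate`, and `aggregate` are
    # hypothetical names chosen for illustration; the real API may differ.

    import suiteeval  # hypothetical package


    def pipeline_generator(dataset):
        """Return a retrieval pipeline for the given dataset.

        The framework would call this once per benchmark dataset; data
        loading, indexing, ranking, metric computation, and result
        aggregation are handled by SuiteEval itself.
        """
        return suiteeval.DenseRetriever(  # hypothetical pipeline class
            model_name="my-embedding-model",
            dataset=dataset,
            top_k=100,
        )


    # One call runs the full evaluation over a built-in benchmark suite.
    results = suiteeval.evaluate(pipeline_generator, suite="BEIR")
    print(results.aggregate())  # e.g. mean of the per-dataset metrics

Under these assumptions, adding a new benchmark suite "in a single line" would amount to one registration call that maps a suite name to its list of datasets.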