Benchmark quality is critical for meaningful evaluation and sustained progress in time series forecasting, particularly with the rise of pretrained models. Existing benchmarks often have limited domain coverage or overlook real-world settings such as tasks with covariates. Their aggregation procedures frequently lack statistical rigor, making it unclear whether observed performance differences reflect genuine improvements or random variation. Many benchmarks also lack consistent evaluation infrastructure or are too rigid to integrate into existing pipelines. To address these gaps, we propose fev-bench, a benchmark of 100 forecasting tasks across seven domains, 46 of which include covariates. To support the benchmark, we introduce fev, a lightweight Python library for forecasting evaluation that emphasizes reproducibility and integration with existing workflows. Building on fev, fev-bench employs principled aggregation with bootstrapped confidence intervals to report performance along two dimensions: win rates and skill scores. We report results on fev-bench for pretrained, statistical, and baseline models, and identify promising directions for future research.
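To make the aggregation procedure concrete, the following is a minimal illustrative sketch of bootstrapped confidence intervals over per-task errors, reporting both win rates and skill scores. It is not fev's actual API; the model names, error values, error metric, and the choice of a seasonal-naive baseline for the skill score are all assumptions, and the skill score follows one common formulation (one minus the geometric mean of per-task error ratios against the baseline).

```python
# Sketch of bootstrapped aggregation over tasks: win rates and skill scores
# with percentile confidence intervals. NOT fev's API; all names and data
# below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

# errors[model]: array of per-task errors (e.g., MASE); synthetic stand-in data.
models = ["model_a", "model_b", "seasonal_naive"]
n_tasks = 100
errors = {m: rng.uniform(0.5, 1.5, size=n_tasks) for m in models}
baseline = "seasonal_naive"  # assumed skill-score reference model

def win_rate(errs, model, task_idx):
    # Fraction of (task, opponent) comparisons the model wins; ties count as 0.5.
    wins = []
    for other in errs:
        if other == model:
            continue
        a, b = errs[model][task_idx], errs[other][task_idx]
        wins.append(np.mean(np.where(a < b, 1.0, np.where(a == b, 0.5, 0.0))))
    return float(np.mean(wins))

def skill_score(errs, model, task_idx):
    # 1 - geometric mean of per-task error ratios vs. the baseline model.
    ratios = errs[model][task_idx] / errs[baseline][task_idx]
    return 1.0 - float(np.exp(np.mean(np.log(ratios))))

n_boot = 1000
for model in models:
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n_tasks, size=n_tasks)  # resample tasks with replacement
        stats.append((win_rate(errors, model, idx), skill_score(errors, model, idx)))
    wr, ss = np.array(stats).T
    print(f"{model}: win rate {wr.mean():.3f} "
          f"[{np.percentile(wr, 2.5):.3f}, {np.percentile(wr, 97.5):.3f}], "
          f"skill {ss.mean():.3f} "
          f"[{np.percentile(ss, 2.5):.3f}, {np.percentile(ss, 97.5):.3f}]")
```

Resampling at the task level (rather than at the individual-forecast level) is what allows the resulting intervals to indicate whether a ranking difference between two models is robust to the particular set of tasks included in the benchmark.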