Benchmarks that closely reflect downstream application scenarios are essential for the streamlined adoption of new research in tabular machine learning (ML). In this work, we examine existing tabular benchmarks and find two common characteristics of industry-grade tabular data that are underrepresented in the datasets available to the academic community. First, tabular data often changes over time in real-world deployment scenarios. This impacts model performance and requires time-based train and test splits for correct model evaluation. Yet, existing academic tabular datasets often lack the timestamp metadata to enable such evaluation. Second, a considerable portion of datasets in production settings stems from extensive data acquisition and feature engineering pipelines. For each specific dataset, this can have a different impact on the absolute and relative numbers of predictive, uninformative, and correlated features, which in turn can affect model selection. To fill the aforementioned gaps in academic benchmarks, we introduce TabReD -- a collection of eight industry-grade tabular datasets covering a wide range of domains from finance to food delivery services. We assess a large number of tabular ML models in the feature-rich, temporally-evolving data setting facilitated by TabReD. We demonstrate that evaluation on time-based data splits leads to a different ranking of methods compared to evaluation on the random splits more common in academic benchmarks. Furthermore, on the TabReD datasets, MLP-like architectures and GBDT show the best results, while more sophisticated DL models are yet to prove their effectiveness.
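The distinction between random and time-based splits can be illustrated with a minimal sketch. The snippet below uses a synthetic dataset with hypothetical column names (`timestamp`, `feature`, `target`); it is not the TabReD evaluation code, only an illustration of the splitting protocols being contrasted.

```python
import numpy as np
import pandas as pd

# Synthetic tabular dataset with a timestamp column (hypothetical schema).
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "timestamp": pd.date_range("2023-01-01", periods=n, freq="h"),
    "feature": rng.normal(size=n),
    "target": rng.integers(0, 2, size=n),
})

# Random split (common in academic benchmarks): ignores time ordering,
# so test rows can predate training rows.
random_test = df.sample(frac=0.2, random_state=0)
random_train = df.drop(random_test.index)

# Time-based split (closer to deployment): train on the past, test on the future.
df_sorted = df.sort_values("timestamp")
split = int(len(df_sorted) * 0.8)
time_train = df_sorted.iloc[:split]
time_test = df_sorted.iloc[split:]

# Every training timestamp precedes every test timestamp.
assert time_train["timestamp"].max() <= time_test["timestamp"].min()
```

Under temporal distribution shift, the random split leaks future information into training, which can inflate scores and reorder method rankings relative to the deployment-faithful time-based split.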