Deep learning architectures for supervised learning on tabular data range from simple multilayer perceptrons (MLP) to sophisticated Transformers and retrieval-augmented methods. This study highlights a major, yet so far overlooked opportunity for substantially improving tabular MLPs: namely, parameter-efficient ensembling -- a paradigm for implementing an ensemble of models as one model producing multiple predictions. We start by developing TabM -- a simple model based on MLP and our variations of BatchEnsemble (an existing technique). Then, we perform a large-scale evaluation of tabular DL architectures on public benchmarks in terms of both task performance and efficiency, which casts the landscape of tabular DL in a new light. Generally, we show that MLPs, including TabM, form a line of stronger and more practical models compared to attention- and retrieval-based architectures. In particular, we find that TabM demonstrates the best performance among tabular DL models. Lastly, we conduct an empirical analysis of the ensemble-like nature of TabM. For example, we observe that the multiple predictions of TabM are weak individually, but powerful collectively. Overall, our work brings an impactful technique to tabular DL, analyses its behaviour, and advances the performance-efficiency trade-off with TabM -- a simple and powerful baseline for researchers and practitioners.
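To make the parameter-efficient ensembling idea concrete, the following is a minimal NumPy sketch of a BatchEnsemble-style linear layer (the existing technique the abstract builds on), not TabM's exact architecture: a single weight matrix is shared across all k ensemble members, while each member gets only cheap rank-1 input/output scaling vectors and a bias, so one forward pass yields k predictions. All names and dimensions here are illustrative assumptions.

```python
import numpy as np

def batch_ensemble_linear(x, W, R, S, B):
    """One BatchEnsemble-style linear layer (illustrative sketch).

    x: (k, n, d_in)  -- the same batch replicated for k ensemble members
    W: (d_in, d_out) -- weight matrix SHARED by all k members
    R: (k, d_in)     -- per-member input scaling (rank-1 adapter)
    S: (k, d_out)    -- per-member output scaling (rank-1 adapter)
    B: (k, d_out)    -- per-member bias
    Returns (k, n, d_out): one prediction per member per input row.
    """
    return (x * R[:, None, :]) @ W * S[:, None, :] + B[:, None, :]

rng = np.random.default_rng(0)
k, n, d_in, d_out = 4, 8, 16, 32          # 4 ensemble members, batch of 8

x = rng.normal(size=(n, d_in))
xk = np.broadcast_to(x, (k, n, d_in))     # every member sees the same input

W = rng.normal(size=(d_in, d_out)) * 0.1  # shared across members
R = rng.normal(size=(k, d_in))
S = rng.normal(size=(k, d_out))
B = np.zeros((k, d_out))

h = batch_ensemble_linear(xk, W, R, S, B)  # shape (k, n, d_out)

# The ensemble costs far fewer parameters than k independent linear layers:
ensemble_params = W.size + R.size + S.size + B.size
independent_params = k * (W.size + d_out)  # k separate weights + biases
```

Averaging the k outputs then plays the role of classical ensembling, at roughly the cost of a single model.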