The long-standing dominance of gradient-boosted decision trees on tabular data has recently been challenged by in-context learning tabular foundation models. In-context learning methods fit and predict in a single forward pass, without parameter updates, by using the training data as context for predictions on query test points. While recent tabular foundation models achieve state-of-the-art performance, their transformer architectures rely on an attention mechanism whose cost is quadratic in dataset size, which increases training and inference overhead and limits the models' capacity to handle large-scale datasets. In this work, we propose TACO, an end-to-end tabular compression model that compresses the training dataset into a latent space. We evaluate our method on the TabArena benchmark, where it is up to 94$\times$ faster at inference and consumes up to 97\% less memory than the state-of-the-art tabular transformer architecture, while retaining performance without significant degradation. Moreover, our method not only scales better with increasing dataset size, but also outperforms the other baselines.