Recent advances in tabular in-context learning (ICL) show that a single pretrained model can adapt to new prediction tasks from a small set of labeled examples, avoiding per-task training and heavy tuning. However, many real-world tasks live in relational databases, where predictive signal is spread across multiple linked tables rather than a single flat table. We show that tabular ICL can be extended to relational prediction with a simple recipe: automatically featurize each target row using relational aggregations over its linked records, materialize the resulting augmented table, and run an off-the-shelf tabular foundation model on it. We package this approach in \textit{RDBLearn} (https://github.com/HKUSHXLab/rdblearn), an easy-to-use toolkit with a scikit-learn-style estimator interface that makes it straightforward to swap different tabular ICL backends; a complementary agent-specific interface is provided as well. Across a broad collection of RelBench and 4DBInfer datasets, RDBLearn is the best-performing foundation model approach we evaluate, at times even outperforming strong supervised baselines trained or fine-tuned on each dataset.
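The recipe described above (featurize each target row with relational aggregations over its linked records, then materialize the augmented table) can be sketched with a minimal pandas example. This is an illustrative sketch, not RDBLearn's actual implementation: the table names (\texttt{users}, \texttt{orders}), the \texttt{featurize} helper, and the chosen aggregations are all hypothetical, and the final step of running a tabular ICL backend on the augmented table is omitted.

```python
import pandas as pd

# Toy relational database: a target table (users) plus one linked table (orders).
# Both names and schemas are hypothetical, for illustration only.
users = pd.DataFrame({"user_id": [1, 2, 3], "age": [25, 32, 41]})
orders = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 2, 3],
    "amount": [10.0, 20.0, 5.0, 7.0, 3.0, 50.0],
})

def featurize(target: pd.DataFrame, linked: pd.DataFrame, key: str) -> pd.DataFrame:
    """Augment each target row with aggregations over its linked records."""
    aggs = linked.groupby(key)["amount"].agg(["count", "sum", "mean"])
    aggs.columns = [f"orders_amount_{c}" for c in aggs.columns]
    # Left-join so target rows with no linked records are kept (NaN-filled);
    # the result is the materialized augmented table.
    return target.merge(aggs, on=key, how="left")

augmented = featurize(users, orders, "user_id")
# `augmented` now has one row per user with count/sum/mean of their orders,
# ready to be passed to any off-the-shelf tabular foundation model.
```

In the toolkit itself this augmented table would then be handed to a scikit-learn-style estimator wrapping a tabular ICL backend (fit on the few labeled rows, predict on the rest); that interface is not reproduced here.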