Pretrained transformers readily adapt to new sequence modeling tasks via zero-shot prompting, but relational domains still lack architectures that transfer across datasets and tasks. The core challenge is the diversity of relational data: heterogeneous schemas, graph structures, and functional dependencies vary from database to database. In this paper, we present the Relational Transformer (RT) architecture, which can be pretrained on diverse relational databases and applied directly to unseen datasets and tasks without task- or dataset-specific fine-tuning or retrieval of in-context examples. RT (i) tokenizes cells with table and column metadata, (ii) is pretrained via masked token prediction, and (iii) employs a novel Relational Attention mechanism over columns, rows, and primary-foreign key links. Pretrained on RelBench datasets spanning tasks such as churn and sales forecasting, RT attains strong zero-shot performance: on binary classification tasks, a single forward pass of a 22M-parameter model averages 93% of fully supervised AUROC, versus 84% for a 27B-parameter LLM. Fine-tuning yields state-of-the-art results with high sample efficiency. Our experiments show that RT's zero-shot transfer harnesses task-table context, relational attention patterns, and schema semantics. Overall, RT offers a practical path toward foundation models for relational data.
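To make the Relational Attention idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual implementation, whose details are not given here): each database cell becomes one token, and a boolean mask permits attention between two cell tokens when they share a row, share a column, or belong to rows connected by a primary-foreign key link. All function and variable names below are illustrative assumptions.

```python
from itertools import product

def relational_attention_mask(cells, fk_links):
    """Build an n-by-n boolean attention mask over cell tokens.

    cells: list of (table, row_id, column) tuples, one per token.
    fk_links: set of frozensets {(table_a, row_a), (table_b, row_b)}
              marking rows joined by a primary-foreign key edge.
    """
    n = len(cells)
    mask = [[False] * n for _ in range(n)]
    for i, j in product(range(n), repeat=2):
        t_i, r_i, c_i = cells[i]
        t_j, r_j, c_j = cells[j]
        same_row = (t_i, r_i) == (t_j, r_j)
        same_column = (t_i, c_i) == (t_j, c_j)
        key_linked = frozenset({(t_i, r_i), (t_j, r_j)}) in fk_links
        # A cell attends to another cell only along a relational edge:
        # same row, same column, or a PK-FK link between their rows.
        mask[i][j] = same_row or same_column or key_linked
    return mask

# Toy schema: a customers table and an orders table, where orders row 1
# references customers row 1 via a foreign key.
cells = [
    ("customers", 1, "name"),     # token 0
    ("customers", 1, "churned"),  # token 1
    ("orders", 1, "total"),       # token 2
    ("orders", 2, "total"),       # token 3
]
links = {frozenset({("customers", 1), ("orders", 1)})}
mask = relational_attention_mask(cells, links)
```

In a real transformer this mask would be passed to the attention layer so that each cell token aggregates information only from its row, its column, and key-linked rows, rather than from every token in the database.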