TabPFN is a transformer that achieves state-of-the-art performance on supervised tabular tasks by amortizing Bayesian prediction into a single forward pass. However, no method currently exists for decomposing TabPFN's predictive uncertainty. Because it behaves, in an idealized limit, as a Bayesian in-context learner, we cast the decomposition problem as one of Bayesian predictive inference (BPI). The main computational tool in BPI, predictive Monte Carlo, is difficult to apply here because it requires simulating unmodeled covariates. We therefore pursue the asymptotic alternative, filling a gap in the theory for supervised settings by proving a predictive CLT under quasi-martingale conditions. We derive variance estimators determined by the volatility of predictive updates along the context. The resulting credible bands are fast to compute, target epistemic uncertainty, and achieve near-nominal frequentist coverage. For classification, we further obtain an entropy-based uncertainty decomposition.
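For intuition, one canonical form such an entropy-based decomposition can take (notation ours, not necessarily the construction used in the paper) conditions the class probabilities $p(y \mid x, \theta)$ on a latent $\theta$ drawn from the posterior given the context $D_n$, and splits the total predictive entropy into aleatoric and epistemic parts,
\[
\underbrace{\mathbb{H}\big[\mathbb{E}_{\theta \mid D_n}\, p(y \mid x, \theta)\big]}_{\text{total}}
= \underbrace{\mathbb{E}_{\theta \mid D_n}\, \mathbb{H}\big[p(y \mid x, \theta)\big]}_{\text{aleatoric}}
+ \underbrace{\mathbb{I}\big[y \,;\, \theta \mid x, D_n\big]}_{\text{epistemic}},
\]
where the mutual-information term captures epistemic uncertainty. Similarly, a natural quasi-martingale form for the variance estimator, given predictive means $\hat{\mu}_i$ obtained after conditioning on the first $i$ context points, is the accumulated quadratic variation $\hat{\sigma}_n^2 = \sum_{i=1}^{n} (\hat{\mu}_i - \hat{\mu}_{i-1})^2$; this is offered only as a sketch of the form such estimators take, not as the estimator derived in the paper.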