Modern probabilistic regressors often remain overconfident under distribution shift. Functional Distribution Networks (FDN) place input-conditioned distributions over network weights, producing predictive mixtures whose dispersion adapts to the input; we train them with a Monte Carlo beta-ELBO objective. We pair FDN with an evaluation protocol that separates interpolation from extrapolation and emphasizes simple OOD sanity checks. On controlled 1D tasks and small/medium UCI-style regression benchmarks, FDN remains competitive in accuracy with strong Bayesian, ensemble, dropout, and hypernetwork baselines, while providing strongly input-dependent, shift-aware uncertainty and competitive calibration under matched parameter and update budgets.
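To make the core idea concrete, here is a minimal, hedged sketch (not the authors' code) of an input-conditioned weight distribution and a Monte Carlo beta-ELBO: a tiny "hypernetwork" maps each input x to the mean and log-std of a diagonal Gaussian over the weights of a 1D linear head, predictions are Monte Carlo mixtures over sampled weights, and the objective is the average log-likelihood minus beta times KL(q(w|x) || N(0, I)). All names, dimensions, and the standard-normal prior are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative hypernetwork parameters: 1D input, 4 hidden units,
# 2 head weights (slope and intercept). These shapes are assumptions.
H = {
    "W1": rng.standard_normal((4, 1)) * 0.5, "b1": np.zeros(4),
    "Wmu": rng.standard_normal((2, 4)) * 0.1, "bmu": np.zeros(2),
    "Wsig": rng.standard_normal((2, 4)) * 0.1, "bsig": -1.0 * np.ones(2),
}

def hyper(x, H):
    # Map input x to the parameters of q(w | x), a diagonal Gaussian
    # over the head weights. Dispersion can thus adapt to the input.
    h = np.tanh(H["W1"] @ np.atleast_1d(x) + H["b1"])
    mu = H["Wmu"] @ h + H["bmu"]            # mean of q(w | x)
    log_sigma = H["Wsig"] @ h + H["bsig"]   # log-std of q(w | x)
    return mu, log_sigma

def predictive_samples(x, H, n_mc=50):
    # Sample weights w ~ q(w | x) and evaluate the linear head
    # y = w0 * x + w1; the result is a predictive mixture.
    mu, log_sigma = hyper(x, H)
    w = mu + np.exp(log_sigma) * rng.standard_normal((n_mc, mu.size))
    return w[:, 0] * x + w[:, 1]

def beta_elbo(x, y, H, beta=0.1, n_mc=50, noise=0.1):
    # Monte Carlo beta-ELBO for one (x, y) pair: MC-averaged Gaussian
    # log-likelihood minus beta * KL(q(w|x) || N(0, I)).
    mu, log_sigma = hyper(x, H)
    w = mu + np.exp(log_sigma) * rng.standard_normal((n_mc, mu.size))
    pred = w[:, 0] * x + w[:, 1]
    ll = np.mean(-0.5 * ((y - pred) / noise) ** 2
                 - np.log(noise * np.sqrt(2.0 * np.pi)))
    # Closed-form KL between diagonal Gaussians q(w|x) and N(0, I).
    kl = 0.5 * np.sum(np.exp(2 * log_sigma) + mu ** 2 - 1 - 2 * log_sigma)
    return ll - beta * kl
```

In a real implementation the hypernetwork parameters would be trained by maximizing this objective with the reparameterization trick through an autodiff framework; the sketch only shows the forward computation of the mixture and the objective.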