Bayesian regression determines model parameters by minimizing the expected loss, an upper bound to the true generalization error. However, the loss ignores misspecification, i.e. the fact that the model itself is imperfect. Parameter uncertainties from Bayesian regression are thus significantly underestimated and vanish in the large-data limit. This is particularly problematic when building models of low-noise, or near-deterministic, calculations, as the main source of uncertainty is neglected. We analyze the generalization error of misspecified, near-deterministic surrogate models, a regime of broad relevance in science and engineering. We show that posterior distributions must cover every training point to avoid a divergent generalization error, and design an ansatz that respects this constraint, which for linear models incurs minimal overhead. This is demonstrated on model problems before application to thousand-dimensional datasets in atomistic machine learning. Our efficient misspecification-aware scheme gives accurate prediction and bounding of test errors where existing schemes fail, allowing this important source of uncertainty to be incorporated in computational workflows.
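The failure mode described above, and the covering constraint that fixes it, can be illustrated with a minimal sketch. The code below is not the paper's scheme: it fits a standard conjugate Bayesian linear model to near-deterministic data that the model cannot represent (misspecification), shows that training residuals lie many predictive standard deviations from the posterior mean, and then applies an illustrative variance inflation, the smallest uniform rescaling that makes the posterior cover every training point at one sigma. All names and the specific inflation rule are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Near-deterministic "truth" that a linear-in-x model cannot represent
# (misspecification): f(x) = sin(2x), observed with tiny noise.
N = 200
x = np.sort(rng.uniform(-1.0, 1.0, N))
y = np.sin(2 * x) + 1e-4 * rng.standard_normal(N)

# Standard conjugate Bayesian linear regression with features [1, x],
# fixed (tiny) noise variance sigma2 and a weak Gaussian prior.
Phi = np.column_stack([np.ones_like(x), x])
sigma2, alpha = 1e-8, 1e-2
A = alpha * np.eye(2) + Phi.T @ Phi / sigma2   # posterior precision
mean = np.linalg.solve(A, Phi.T @ y / sigma2)  # posterior mean weights
cov = np.linalg.inv(A)                         # posterior covariance

pred = Phi @ mean
var_model = np.einsum("ij,jk,ik->i", Phi, cov, Phi) + sigma2
resid = y - pred

# Standard posterior: predictive std collapses with N, so training
# residuals sit far outside the posterior -> uncertainty is
# drastically underestimated under misspecification.
z_standard = np.abs(resid) / np.sqrt(var_model)   # max >> 1

# Illustrative misspecification-aware inflation (an assumption, not
# the paper's exact ansatz): rescale the predictive variance by the
# smallest factor c so the posterior covers every training point.
c = np.max(resid**2 / var_model)
z_covering = np.abs(resid) / np.sqrt(c * var_model)  # max == 1.0

print(z_standard.max(), z_covering.max())
```

With the standard posterior, the worst training point lies hundreds of standard deviations from the mean; after inflation, every point is covered by construction. The paper's point is that such covering must hold to keep the generalization error finite, and that a constraint-respecting ansatz can achieve it cheaply for linear models.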