We propose a scalable kinetic Langevin dynamics algorithm for sampling parameter spaces in big data and AI applications. Our scheme combines a symmetric forward/backward sweep over minibatches with a symmetric discretization of Langevin dynamics. For a particular Langevin splitting method (UBU), we show that the resulting Symmetric Minibatch Splitting-UBU (SMS-UBU) integrator has bias $O(h^2 d^{1/2})$ in dimension $d>0$ with stepsize $h>0$, despite using only one minibatch per iteration, thus providing excellent control of the sampling bias as a function of the stepsize. We apply the algorithm to explore local modes of the posterior distribution of Bayesian neural networks (BNNs) and evaluate the calibration of the posterior predictive probabilities for convolutional neural network architectures on classification problems over three datasets (Fashion-MNIST, Celeb-A and chest X-ray). Our results indicate that BNNs sampled with SMS-UBU can offer significantly better calibration performance than standard training and stochastic weight averaging.
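To make the construction concrete, the following is a minimal sketch of the scheme described above: a UBU step for kinetic Langevin dynamics (exact Ornstein-Uhlenbeck half-steps around a gradient kick, at unit temperature and friction $\gamma$), wrapped in a symmetric forward/backward sweep over minibatch gradients. The function names, the choice of NumPy, and the per-step bookkeeping are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ou_half(x, v, t, gamma, rng):
    # Exact solution of dX = V dt, dV = -gamma*V dt + sqrt(2*gamma) dW over time t,
    # at unit temperature; the (x, v) noise increments are correlated Gaussians.
    eta = np.exp(-gamma * t)
    s_vv = 1.0 - eta**2                                    # Var of v-noise
    s_xv = (1.0 - eta)**2 / gamma                          # Cov of (x, v) noise
    s_xx = (2.0*gamma*t - 3.0 + 4.0*eta - eta**2) / gamma**2  # Var of x-noise
    z1, z2 = rng.standard_normal((2, np.size(x)))
    v_new = eta * v + np.sqrt(s_vv) * z1
    x_new = (x + (1.0 - eta) / gamma * v
             + (s_xv / np.sqrt(s_vv)) * z1
             + np.sqrt(max(s_xx - s_xv**2 / s_vv, 0.0)) * z2)
    return x_new, v_new

def ubu_step(x, v, h, gamma, grad, rng):
    # UBU splitting: OU half-step (U), full gradient kick (B), OU half-step (U).
    x, v = ou_half(x, v, h / 2, gamma, rng)
    v = v - h * grad(x)
    x, v = ou_half(x, v, h / 2, gamma, rng)
    return x, v

def sms_ubu_sweep(x, v, h, gamma, grads, rng):
    # Symmetric minibatch sweep: one UBU step per minibatch gradient,
    # visiting the K minibatches in order 1..K and then K..1.
    order = list(range(len(grads))) + list(reversed(range(len(grads))))
    for i in order:
        x, v = ubu_step(x, v, h, gamma, grads[i], rng)
    return x, v
```

On a standard Gaussian target (where each "minibatch" gradient is taken exact, so only the discretization bias remains), long runs of `sms_ubu_sweep` reproduce the target mean and variance closely, consistent with the $O(h^2)$ bias in the stepsize.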