Among other uses, neural networks are a powerful tool for solving deterministic and Bayesian inverse problems in real time. In particular, variational autoencoders, a specialized type of neural network, enable the Bayesian estimation of model parameters and their distribution from observational data, allowing real-time inverse uncertainty quantification. In this work, we build upon existing research [Goh, H. et al., Proceedings of Machine Learning Research, 2022] by proposing a novel loss function for training variational autoencoders for Bayesian inverse problems. When the forward map is affine, we provide a theoretical proof of the convergence of the latent states of variational autoencoders to the posterior distribution of the model parameters. We validate this theoretical result through numerical tests and compare the proposed variational autoencoder with the existing one in the literature in terms of both accuracy and generalization properties. Finally, we test the proposed variational autoencoder on a Laplace equation, comparing it with the original one and with Markov chain Monte Carlo.