Black-box variational inference (BBVI) now sees widespread use in machine learning and statistics as a fast yet flexible alternative to Markov chain Monte Carlo methods for approximate Bayesian inference. However, stochastic optimization methods for BBVI remain unreliable and require substantial expertise and hand-tuning to apply effectively. In this paper, we propose Robust and Automated Black-box VI (RABVI), a framework for improving the reliability of BBVI optimization. RABVI is based on rigorously justified automation techniques, includes just a small number of intuitive tuning parameters, and detects inaccurate estimates of the optimal variational approximation. RABVI adaptively decreases the learning rate by detecting convergence of the fixed-learning-rate iterates, then estimates the symmetrized Kullback-Leibler (KL) divergence between the current variational approximation and the optimal one. It also employs a novel optimization termination criterion that enables the user to balance desired accuracy against computational cost by comparing (i) the predicted relative decrease in the symmetrized KL divergence if a smaller learning rate were used and (ii) the predicted computation required to converge with that smaller learning rate. We validate the robustness and accuracy of RABVI through carefully designed simulation studies and a diverse set of real-world model and data examples.
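To make the progress measure concrete, the following is a minimal sketch of the symmetrized KL divergence for one-dimensional Gaussians, using the standard closed-form Gaussian KL. This illustrates only the divergence itself, not RABVI's estimator of it; the function names are illustrative, not from the paper.

```python
import math

def kl_gauss(m1, s1, m2, s2):
    # Closed-form KL( N(m1, s1^2) || N(m2, s2^2) ) for 1-D Gaussians
    return math.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

def symmetrized_kl(m1, s1, m2, s2):
    # Symmetrized KL: KL(p || q) + KL(q || p); zero iff p = q,
    # and (unlike plain KL) invariant to swapping the two arguments
    return kl_gauss(m1, s1, m2, s2) + kl_gauss(m2, s2, m1, s1)

# Identical distributions have zero divergence
print(symmetrized_kl(0.0, 1.0, 0.0, 1.0))  # 0.0
```

In RABVI's setting, one argument would play the role of the current variational approximation and the other the optimal one; the symmetric form is what makes it usable as a two-sided measure of remaining optimization error.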