Bayesian inference often faces a trade-off between computational speed and sampling accuracy. We propose an adaptive workflow that integrates rapid amortized inference with gold-standard MCMC techniques to achieve a favorable combination of both speed and accuracy when performing inference on many observed datasets. Our approach uses principled diagnostics to guide the choice of inference method for each dataset, moving along the Pareto front from fast amortized sampling via generative neural networks to slower but asymptotically exact MCMC when needed. By reusing computations across steps, our workflow synergizes amortized and MCMC-based inference. We demonstrate the effectiveness of this integrated approach on several synthetic and real-world problems with tens of thousands of datasets, showing efficiency gains while maintaining high posterior quality.
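The diagnostic-gated routing described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the "amortized" sampler is a deliberately mis-calibrated Gaussian standing in for a generative neural network, the exact conjugate posterior stands in for MCMC, and the diagnostic is the effective-sample-size fraction of importance weights (the paper's actual diagnostics may differ). All function names and thresholds here are hypothetical.

```python
import math
import random

random.seed(0)


def exact_posterior(data, prior_mu=0.0, prior_var=1.0, noise_var=1.0):
    # Conjugate Gaussian posterior for a mean with known noise variance.
    # Stands in for the slow, asymptotically exact MCMC fallback.
    n = len(data)
    var = 1.0 / (1.0 / prior_var + n / noise_var)
    mu = var * (prior_mu / prior_var + sum(data) / noise_var)
    return mu, var


def amortized_proposal(data):
    # Hypothetical amortized posterior: a crude Gaussian guess with a
    # fixed (mis-calibrated) variance, standing in for a neural sampler.
    return sum(data) / len(data), 0.5


def log_gauss(x, mu, var):
    return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)


def ess_fraction(data, n_draws=1000):
    # Draw from the fast amortized proposal, importance-weight against the
    # target posterior, and report ESS / n_draws as a quality diagnostic.
    qm, qv = amortized_proposal(data)
    pm, pv = exact_posterior(data)
    draws = [random.gauss(qm, math.sqrt(qv)) for _ in range(n_draws)]
    logw = [log_gauss(x, pm, pv) - log_gauss(x, qm, qv) for x in draws]
    mx = max(logw)
    w = [math.exp(lw - mx) for lw in logw]
    total = sum(w)
    wn = [wi / total for wi in w]
    return 1.0 / (n_draws * sum(wi * wi for wi in wn))


def route_datasets(datasets, threshold=0.5):
    # Adaptive workflow: keep the cheap amortized draws when the diagnostic
    # passes, otherwise fall back to the exact (MCMC-like) sampler.
    routes = []
    for data in datasets:
        routes.append("amortized" if ess_fraction(data) >= threshold
                      else "mcmc")
    return routes


datasets = [[random.gauss(mu, 1.0) for _ in range(20)]
            for mu in (-2.0, 0.0, 3.0)]
print(route_datasets(datasets))
```

The routing decision is per dataset, which is what makes the amortized cost worthwhile at the scale of tens of thousands of datasets: the expensive fallback runs only where the diagnostic flags the fast approximation as unreliable.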