Amortized Bayesian Inference (ABI) enables efficient posterior estimation using generative neural networks trained on simulated data, but often suffers from performance degradation under model misspecification. While self-consistency (SC) training on unlabeled empirical data can enhance network robustness, current approaches are limited to static, single-task settings and fail to handle sequentially arriving data or distribution shifts. We propose a continual learning framework for ABI that decouples simulation-based pre-training from unsupervised sequential SC fine-tuning on real-world data. To address the challenge of catastrophic forgetting, we introduce two adaptation strategies: (1) SC with episodic replay, which maintains a memory buffer of past observations, and (2) SC with elastic weight consolidation, which regularizes updates to preserve task-critical parameters. Across three diverse case studies, our methods significantly mitigate forgetting and yield posterior estimates that outperform standard simulation-based training, tracking MCMC reference posteriors more closely and providing a viable path toward trustworthy ABI across a range of tasks.
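To make the second adaptation strategy concrete, the elastic weight consolidation (EWC) term adds a quadratic penalty that pulls fine-tuned weights back toward the pre-trained anchor, weighted by an estimate of each parameter's importance (the diagonal Fisher information). The following is a minimal sketch, not the paper's implementation; the function name, the placeholder SC loss value, and the toy parameter vectors are illustrative assumptions:

```python
import numpy as np

def ewc_penalty(params, anchor_params, fisher_diag, lam=1.0):
    """EWC regularizer: quadratic pull toward the pre-trained (anchor)
    weights, scaled per-parameter by diagonal Fisher information so that
    task-critical parameters resist change during SC fine-tuning."""
    return 0.5 * lam * np.sum(fisher_diag * (params - anchor_params) ** 2)

# Toy illustration (values are hypothetical, not from the paper):
params = np.array([1.2, 0.8])        # weights after some SC updates
anchor = np.array([1.0, 1.0])        # weights from simulation pre-training
fisher = np.array([10.0, 0.1])       # importance: first weight is critical
sc_loss = 0.5                        # placeholder self-consistency loss
total_loss = sc_loss + ewc_penalty(params, anchor, fisher, lam=0.4)
```

The important weight (Fisher value 10.0) dominates the penalty, so gradient steps on `total_loss` would move it back toward its anchor far more strongly than the unimportant one, which is the mechanism that mitigates catastrophic forgetting.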