This article addresses online variational estimation in parametric state-space models. We propose a new procedure for efficiently computing the evidence lower bound and its gradient in a streaming-data setting, where observations arrive sequentially. The algorithm allows the model parameters and the distribution of the latent states given the observations to be trained simultaneously. It is based on i.i.d. Monte Carlo sampling, coupled with a well-chosen deep architecture, enabling both computational efficiency and flexibility. The performance of the method is illustrated on both synthetic data and real-world air-quality data. The proposed approach is theoretically motivated by the existence of an asymptotic contrast function and the ergodicity of the underlying Markov chain, and applies more generally to the computation of additive expectations under posterior distributions in state-space models.
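The core computational idea summarized above — estimating an additive evidence lower bound with i.i.d. Monte Carlo draws as observations stream in — can be sketched as follows. This is a minimal illustration, not the paper's actual procedure: it assumes a hypothetical scalar linear-Gaussian state-space model and a Gaussian variational factor per time step; the function and parameter names (`elbo_increment`, `a`, `b`, `q`, `r`) are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear-Gaussian state-space model (illustrative only):
#   x_t = a * x_{t-1} + N(0, q),    y_t = b * x_t + N(0, r)
a, b, q, r = 0.9, 1.0, 0.1, 0.2

def log_transition(x_t, x_prev):
    # log p(x_t | x_{t-1}) under the Gaussian transition kernel
    return -0.5 * ((x_t - a * x_prev) ** 2 / q + np.log(2 * np.pi * q))

def log_emission(y_t, x_t):
    # log p(y_t | x_t) under the Gaussian emission density
    return -0.5 * ((y_t - b * x_t) ** 2 / r + np.log(2 * np.pi * r))

def elbo_increment(y_t, mu, log_sigma, x_prev_samples, n_samples=100):
    """One-step Monte Carlo estimate of the additive ELBO increment for a
    Gaussian variational factor q(x_t) = N(mu, sigma^2), using i.i.d.
    reparameterized draws so the estimator is differentiable in
    (mu, log_sigma)."""
    sigma = np.exp(log_sigma)
    eps = rng.standard_normal(n_samples)
    x_t = mu + sigma * eps  # reparameterization: x_t = mu + sigma * eps
    # log q(x_t) for a Gaussian, expressed through eps
    log_q = -0.5 * (eps ** 2 + np.log(2 * np.pi)) - log_sigma
    # joint log-density term coupling x_t to the previous samples and y_t
    log_p = log_transition(x_t, x_prev_samples) + log_emission(y_t, x_t)
    return np.mean(log_p - log_q), x_t

# Streaming usage: each new observation adds one increment to the ELBO,
# and the current samples seed the next step's transition term.
x_prev = np.zeros(100)
elbo_total = 0.0
for y_t in [0.5, 0.3, -0.1]:
    inc, x_prev = elbo_increment(y_t, mu=0.0, log_sigma=np.log(0.5),
                                 x_prev_samples=x_prev)
    elbo_total += inc
```

In this sketch the gradient with respect to the variational parameters would be obtained by differentiating the same estimator (e.g. via automatic differentiation), since the reparameterization makes the draws a smooth function of `(mu, log_sigma)`; the additive structure over time steps is what makes the online accumulation possible.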