General state-space models (SSMs) are widely used in statistical machine learning and are among the most classical generative models for sequential time-series data. SSMs, which comprise latent Markovian states, can be subjected to variational inference (VI), but standard VI methods such as the importance-weighted autoencoder (IWAE) are not designed for streaming data. To enable online VI in SSMs when observations arrive in real time, we propose maximising, by stochastic approximation, an IWAE-type variational lower bound on the asymptotic contrast function rather than the standard IWAE ELBO. Unlike recursive maximum-likelihood methods, which maximise the asymptotic contrast directly, our approach, called online sequential IWAE (OSIWAE), allows for online learning of both model parameters and a Markovian recognition model for inferring latent states. By approximating filter-state posteriors and their derivatives using sequential Monte Carlo (SMC) methods, we obtain a particle-based framework for online VI in SSMs. This approach is more theoretically well-founded than recently proposed online variational SMC methods. We provide rigorous theoretical results on the learning objective and a numerical study demonstrating the method's efficiency in learning model parameters and particle proposal kernels.
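To illustrate the kind of importance-weighted bound referred to above, the following is a minimal sketch of the standard IWAE bound for a toy one-dimensional Gaussian model, not the paper's SSM objective: the model, the recognition distribution, and all names here are illustrative assumptions. In this toy case the recognition distribution equals the exact posterior, so the bound coincides with the log marginal likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

def iwae_bound(x, K=32):
    """K-sample IWAE bound for a toy model (illustrative, not the paper's SSM):
       prior p(z) = N(0, 1), likelihood p(x|z) = N(z, 1),
       recognition q(z|x) = N(x/2, 1/2), which is the exact posterior here."""
    mu_q, var_q = x / 2.0, 0.5
    z = mu_q + np.sqrt(var_q) * rng.standard_normal(K)  # K draws from q(.|x)
    # log joint p(x, z) = log p(z) + log p(x|z)
    log_p = -0.5 * (z**2 + (x - z) ** 2 + 2.0 * np.log(2.0 * np.pi))
    # log recognition density q(z|x)
    log_q = -0.5 * ((z - mu_q) ** 2 / var_q + np.log(2.0 * np.pi * var_q))
    log_w = log_p - log_q  # log importance weights
    # IWAE bound: log of the average importance weight (log-sum-exp for stability)
    return np.logaddexp.reduce(log_w) - np.log(K)
```

Because q matches the exact posterior, every log-weight equals log p(x) and the bound is tight for any K; with a mismatched q, averaging K weights tightens the bound as K grows, which is the property the IWAE-type objective exploits.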