State-space models (SSMs) offer a powerful framework for dynamical system analysis, wherein the temporal dynamics of the system are assumed to be captured through the evolution of latent states, which in turn govern the values of the observations. This paper provides a selective review of recent advancements in deep neural network-based approaches for SSMs, and presents a unified perspective on discrete-time deep state-space models and continuous-time ones such as latent neural Ordinary and Stochastic Differential Equations. It starts with an overview of the classical maximum likelihood-based approach for learning SSMs, reviews the variational autoencoder as a general learning pipeline for neural network-based approaches in the presence of latent variables, and discusses in detail representative deep learning models that fall under the SSM framework. Very recent developments, where SSMs are used as standalone architectural modules for improving efficiency in sequence modeling, are also examined. Finally, examples involving mixed-frequency and irregularly-spaced time series data are presented to demonstrate the advantage of SSMs in these settings.