This work aims to improve the generalization and interpretability of dynamical systems by recovering the underlying lower-dimensional latent states and their time evolutions. Previous work on disentangled representation learning within the realm of dynamical systems focused on the latent states, possibly with linear transition approximations. As such, these methods cannot identify nonlinear transition dynamics, and hence fail to reliably predict complex future behavior. Inspired by advances in nonlinear ICA, we propose a state-space modeling framework in which we can identify not just the latent states but also the unknown transition function that maps the past states to the present. We introduce a practical algorithm based on variational auto-encoders and empirically demonstrate in realistic synthetic settings that we can (i) recover latent state dynamics with high accuracy, (ii) correspondingly achieve high future prediction accuracy, and (iii) adapt quickly to new environments.
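To make the modeling setup concrete, the generative process described above can be sketched as a nonlinear state-space model: a latent state evolves through an unknown nonlinear transition function, and observations arise from a nonlinear emission of that state. The sketch below simulates such a system in NumPy; the specific transition map, emission map, dimensions, and noise scales are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper)
d_latent, d_obs, T = 2, 5, 100

def f(z):
    """Hypothetical nonlinear transition: maps the past latent state to the present."""
    A = np.array([[0.9, -0.4], [0.4, 0.9]])  # assumed rotation-like linear part
    return np.tanh(z @ A)

W = rng.normal(size=(d_latent, d_obs))  # hypothetical nonlinear-mixing weights

def g(z):
    """Hypothetical nonlinear emission from latent state to observation."""
    return np.tanh(z @ W)

# Roll out latents with process noise, then emit noisy observations
z = np.zeros((T, d_latent))
x = np.zeros((T, d_obs))
z[0] = rng.normal(size=d_latent)
for t in range(1, T):
    z[t] = f(z[t - 1]) + 0.1 * rng.normal(size=d_latent)
for t in range(T):
    x[t] = g(z[t]) + 0.1 * rng.normal(size=d_obs)

print(z.shape, x.shape)
```

In this setting, only the observation sequence `x` would be available to the learner; the identification goal is to recover both the latent trajectory `z` and the transition function `f` from `x` alone.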