The Hierarchical Dirichlet Process Hidden Markov Model (HDP-HMM) is a natural Bayesian nonparametric extension of the classical Hidden Markov Model for learning from (spatio-)temporal data. The sticky HDP-HMM was proposed to strengthen the self-persistence probability in the HDP-HMM, and the disentangled sticky HDP-HMM was subsequently introduced to decouple the strengths of the self-persistence prior and the transition prior. However, the sticky HDP-HMM assumes that the self-persistence probability is stationary, which limits its expressiveness. Here, we build on previous work on the sticky HDP-HMM and the disentangled sticky HDP-HMM to develop a more general model: the recurrent sticky HDP-HMM (RS-HDP-HMM). We develop a novel Gibbs sampling strategy for efficient inference in this model, and we show that the RS-HDP-HMM outperforms the disentangled sticky HDP-HMM, the sticky HDP-HMM, and the HDP-HMM on both synthetic and real data segmentation tasks.