This paper presents a variant of the Multinomial mixture model tailored for the unsupervised classification of short text data. Traditionally, the Multinomial probability vector in this hierarchical model is assigned a Dirichlet prior. Here we explore an alternative prior, the Beta-Liouville distribution, which offers a more flexible correlation structure than the Dirichlet. We examine the theoretical properties of the Beta-Liouville distribution, focusing on its conjugacy with the Multinomial likelihood. This conjugacy yields closed-form update equations for a Coordinate Ascent Variational Inference (CAVI) algorithm, enabling approximate posterior estimation of the model parameters. We also propose a stochastic variant of the CAVI algorithm that improves scalability. The paper concludes with data examples that demonstrate effective strategies for setting the Beta-Liouville hyperparameters.
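The generative step described above, a Multinomial probability vector drawn from a Beta-Liouville prior, can be sketched in a few lines. This is a minimal illustration assuming the standard Beta-Liouville construction (a Dirichlet draw over the first D-1 components, scaled by an independent Beta variate); the hyperparameter values and vocabulary size are purely illustrative, not taken from the paper.

```python
import numpy as np

def sample_beta_liouville(alpha, a, b, rng):
    """Sample a probability vector p from a Beta-Liouville(alpha, a, b) prior.

    Construction: y ~ Dirichlet(alpha) over the first D-1 components,
    u ~ Beta(a, b); then p = (u*y_1, ..., u*y_{D-1}, 1 - u).
    With a = sum(alpha') and suitable b this reduces to a Dirichlet,
    so the extra (a, b) parameters are what buy the more flexible
    correlation structure.
    """
    y = rng.dirichlet(alpha)           # length D-1, sums to 1
    u = rng.beta(a, b)                 # total mass on the first D-1 words
    return np.append(u * y, 1.0 - u)   # length D, sums to 1

rng = np.random.default_rng(0)

# Illustrative hyperparameters for a 4-word vocabulary (D = 4, so |alpha| = 3)
alpha = np.array([2.0, 1.0, 1.0])
p = sample_beta_liouville(alpha, a=3.0, b=2.0, rng=rng)

# A short "document": 20 word tokens drawn from the Multinomial likelihood
x = rng.multinomial(20, p)
```

Because the Beta-Liouville is conjugate to this Multinomial likelihood, the posterior over `p` given counts `x` stays in the same family, which is what makes the closed-form CAVI updates possible.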