Recent representation learning approaches enhance neural topic models by optimizing a weighted linear combination of the evidence lower bound (ELBO) of the log-likelihood and a contrastive learning objective that contrasts pairs of input documents. However, document-level contrastive learning may capture low-level mutual information, such as word ratio, which disturbs topic modeling. Moreover, there is a potential conflict between the ELBO loss, which memorizes input details for better reconstruction quality, and the contrastive loss, which attempts to learn topic representations that generalize among input documents. To address these issues, we first introduce a novel contrastive learning method oriented towards sets of topic vectors to capture useful semantics shared among a set of input documents. Second, we explicitly cast contrastive topic modeling as a gradient-based multi-objective optimization problem, with the goal of achieving a Pareto stationary solution that balances the trade-off between the ELBO and the contrastive objective. Extensive experiments demonstrate that our framework consistently produces higher-performing neural topic models in terms of topic coherence, topic diversity, and downstream performance.
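The abstract does not specify the multi-objective solver, but a standard way to obtain a Pareto-stationary update direction for two objectives is the closed-form two-task MGDA step (Désidéri's multiple-gradient descent algorithm): take the minimum-norm convex combination of the two gradients, which descends both losses simultaneously when they conflict. The sketch below is illustrative only, with hypothetical names, and is not the authors' exact implementation:

```python
import numpy as np

def pareto_combine(g_elbo: np.ndarray, g_con: np.ndarray) -> np.ndarray:
    """Closed-form two-objective MGDA step (illustrative sketch).

    Returns d = a * g_elbo + (1 - a) * g_con, the minimum-norm point of
    the convex hull of the two gradients. Descending along -d decreases
    both objectives whenever a Pareto-stationary point is not reached.
    """
    diff = g_elbo - g_con
    denom = float(diff @ diff)
    if denom == 0.0:
        a = 0.5  # gradients coincide: any convex weight gives the same d
    else:
        # minimizer of ||a*g_elbo + (1-a)*g_con||^2 over a, clipped to [0, 1]
        a = float((g_con - g_elbo) @ g_con) / denom
        a = min(1.0, max(0.0, a))
    return a * g_elbo + (1.0 - a) * g_con

# Toy example with two conflicting gradients: the combined direction
# has a positive inner product with both, so a step along -d reduces
# both the reconstruction and the contrastive loss.
g1 = np.array([1.0, 0.0])  # stand-in for the ELBO gradient
g2 = np.array([0.0, 1.0])  # stand-in for the contrastive gradient
d = pareto_combine(g1, g2)  # -> array([0.5, 0.5])
```

When the weight `a` saturates at 0 or 1, one gradient already dominates and the update reduces to single-objective descent; in the interior, the adaptive weight replaces the fixed linear-combination coefficient criticized in the abstract.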