Topic modeling extracts latent themes from large text collections, but leading approaches such as BERTopic face critical limitations: stochastic instability, loss of lexical precision ("Embedding Blur"), and reliance on a single data perspective. We present TriTopic, a framework that addresses these weaknesses through a tri-modal graph fusing semantic embeddings, TF-IDF features, and metadata. Three core innovations drive its performance: hybrid graph construction via mutual k-nearest-neighbor (kNN) filtering and Shared Nearest Neighbor (SNN) weighting, which removes noisy edges and combats the curse of dimensionality; Consensus Leiden Clustering for reproducible, stable partitions; and Iterative Refinement that sharpens embeddings through dynamic centroid-pulling. TriTopic also replaces the "average document" concept with archetype-based topic representations defined by boundary cases rather than centers alone. In benchmarks on 20 Newsgroups, BBC News, AG News, and Arxiv, TriTopic achieves the highest NMI on every dataset (mean NMI 0.575 vs. 0.513 for BERTopic, 0.416 for NMF, and 0.299 for LDA), guarantees 100% corpus coverage with 0% outliers, and is available as an open-source PyPI library.
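To make the hybrid graph-construction idea concrete, the following is a minimal, brute-force sketch (not TriTopic's actual implementation): an edge between two documents survives only if each is among the other's k nearest neighbors (mutual kNN), and the surviving edge is weighted by the number of neighbors the two endpoints share (SNN). All function names and the toy 2-D points are illustrative assumptions.

```python
from itertools import combinations

def knn(points, k):
    # Brute-force k-nearest-neighbor lookup (squared Euclidean distance).
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    neighbors = {}
    for i, p in enumerate(points):
        order = sorted((j for j in range(len(points)) if j != i),
                       key=lambda j: sqdist(p, points[j]))
        neighbors[i] = set(order[:k])
    return neighbors

def mutual_knn_snn(points, k):
    """Keep only mutual kNN edges; weight each edge by the count of
    shared neighbors (a simple SNN weight). Returns {(i, j): weight}."""
    neighbors = knn(points, k)
    edges = {}
    for i, j in combinations(range(len(points)), 2):
        if j in neighbors[i] and i in neighbors[j]:   # mutual kNN filter
            edges[(i, j)] = len(neighbors[i] & neighbors[j])  # SNN weight
    return edges

# Two well-separated toy clusters: mutual kNN keeps intra-cluster
# edges and drops any edge between the clusters.
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
edges = mutual_knn_snn(points, k=2)
```

The mutual-kNN filter is what prunes the "hub" edges that plain kNN graphs accumulate in high dimensions; the SNN weight then favors edges whose endpoints sit in the same dense region, which is the property a community-detection step like Leiden exploits.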