While continuous diffusion models excel at modeling continuous distributions, their application to categorical data has been less effective. Recent work has shown that ratio matching through score entropy within a continuous-time discrete Markov chain (CTMC) framework serves as a competitive alternative to autoregressive models in language modeling. To enhance this framework, we first introduce three new theorems concerning the KL divergence between the data distribution and the learned distribution. Our results serve as the discrete counterpart to those established for continuous diffusion models and allow us to derive an improved upper bound on the perplexity. Second, we empirically show that ratio matching performed by minimizing the denoising cross-entropy between the clean and corrupted data enables models to outperform those trained with score entropy, achieving up to 10% lower perplexity and generative perplexity while requiring 15% fewer training steps. To further support our findings, we introduce and evaluate a novel CTMC transition-rate matrix that allows prediction refinement, and we derive the analytic expression for its matrix exponential, which facilitates the computation of conditional ratios and thus enables efficient training and generation.
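To make the denoising cross-entropy objective concrete, below is a minimal training-loss sketch: the model receives a sequence corrupted by the forward CTMC and is trained with cross-entropy to recover the clean tokens. The names `model` and `sample_forward`, along with the tensor shapes, are illustrative assumptions, not the paper's actual API.

```python
import torch
import torch.nn.functional as F

def denoising_ce_loss(model, sample_forward, x0, t):
    """Denoising cross-entropy for discrete diffusion (illustrative sketch).

    x0: (B, L) clean token ids; t: (B,) diffusion times.
    sample_forward(x0, t): draws x_t from the forward CTMC kernel exp(t*Q).
    model(x_t, t): returns logits of shape (B, L, V) over the clean tokens.
    """
    xt = sample_forward(x0, t)                    # corrupt the clean data
    logits = model(xt, t)                         # predict x0 from x_t
    return F.cross_entropy(logits.flatten(0, 1),  # (B*L, V)
                           x0.flatten())          # (B*L,)
```

The sketch covers only the training loss; per the abstract, the conditional ratios needed at generation time are then computed with the help of the analytic transition kernel.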
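The abstract does not spell out the new transition-rate matrix, but the role of an analytic matrix exponential can be illustrated with the standard uniform CTMC over N states, whose rate matrix Q = (1/N)·11^T − I satisfies Q² = −Q and therefore exp(tQ) = e^{−t}·I + (1 − e^{−t})·(1/N)·11^T. The snippet below verifies this closed form numerically; it is a reference example, not the paper's proposed matrix.

```python
import numpy as np
from scipy.linalg import expm

N, t = 8, 1.3
P = np.full((N, N), 1.0 / N)   # projection onto the uniform distribution
Q = P - np.eye(N)              # uniform CTMC rate matrix (rows sum to 0)

# Closed form: exp(tQ) = e^{-t} I + (1 - e^{-t}) P
closed = np.exp(-t) * np.eye(N) + (1 - np.exp(-t)) * P
assert np.allclose(expm(t * Q), closed)
```

Having such a closed form avoids numerical matrix exponentiation at every noise level, which is what makes computing the conditional ratios cheap during training and sampling.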