Recently, discrete diffusion language models have demonstrated promising results in NLP. However, there has been limited research on integrating Pretrained Language Models (PLMs) into discrete diffusion models, resulting in underwhelming performance on downstream NLP generation tasks. This integration is particularly challenging because of the discrepancy between the step-wise denoising strategy of diffusion models and the single-step mask prediction approach of MLM-based PLMs. In this paper, we introduce Diffusion-EAGS, a novel approach that effectively integrates PLMs with diffusion models. Furthermore, since it is challenging for PLMs to determine where to apply denoising during the diffusion process, we integrate an entropy tracking module to assist them. Finally, we propose entropy-based noise scheduling in the forward process to improve the effectiveness of entropy-adaptive sampling throughout the generation phase. Experimental results show that Diffusion-EAGS outperforms existing diffusion baselines on downstream generation tasks, achieving high text quality and diversity with precise token-level control. We also show that our model adapts well to bilingual and low-resource settings, which are common in real-world applications.
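The abstract does not spell out the sampling rule, but the core idea of entropy-adaptive sampling can be illustrated with a minimal sketch: at each reverse step, score every still-masked position by the entropy of the PLM's predictive distribution and denoise the lowest-entropy (most confident) positions first. The names below (`mlm`, `mask_id`, `k`) are hypothetical, and the actual Diffusion-EAGS procedure may differ.

```python
# Illustrative sketch of entropy-adaptive denoising, not the paper's
# exact algorithm. Assumes an MLM-style model returning per-position
# logits of shape (seq_len, vocab_size) for a 1-D tensor of token ids.
import torch
import torch.nn.functional as F

def entropy_adaptive_step(mlm, token_ids, mask_id, k=4):
    """Unmask the k masked positions the PLM is most certain about."""
    logits = mlm(token_ids)                       # (seq_len, vocab_size)
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs + 1e-9)).sum(dim=-1)  # (seq_len,)

    masked = token_ids == mask_id
    entropy[~masked] = float("inf")               # only rank masked slots
    k = min(k, int(masked.sum()))
    if k == 0:
        return token_ids                          # nothing left to denoise

    positions = entropy.topk(k, largest=False).indices  # lowest entropy
    token_ids = token_ids.clone()
    token_ids[positions] = probs[positions].argmax(dim=-1)
    return token_ids
```

Repeating this step until no masks remain yields a generation order driven by model confidence rather than a fixed left-to-right or uniform schedule, which is the behavior the entropy tracking module is described as enabling.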