This paper presents the Text Encoding Diffusion Model (TEncDM), a novel approach to diffusion modeling that operates in the space of pre-trained language model encodings. In contrast to the traditionally used embeddings, encodings integrate contextual information. Our approach also employs a transformer-based decoder specifically designed to incorporate context into the token prediction process. We conduct a comprehensive examination of the influence of the encoder, decoder, noise scheduler, and self-conditioning on zero-shot generation. Furthermore, we compare TEncDM with previous approaches on three conditional text generation tasks: QQP, XSum, and Wiki-Auto. The results show that TEncDM exhibits superior performance compared to existing non-autoregressive diffusion models. Our code is available at https://github.com/M0RJIQUE/tencdm.