This paper presents the Text Encoding Diffusion Model (TEncDM), a novel approach to diffusion modeling that operates in the space of pre-trained language model encodings. Unlike the token embeddings used in prior diffusion models, encodings integrate contextual information. Our approach also employs a transformer-based decoder specifically designed to incorporate context into the token prediction process. We conduct a comprehensive examination of the influence of the encoder, decoder, noise scheduler, and self-conditioning on zero-shot generation. Furthermore, we compare TEncDM with previous approaches on three conditional text generation tasks: QQP, XSum, and Wiki-Auto. The results show that TEncDM outperforms existing non-autoregressive diffusion models. Our code is available at https://github.com/M0RJIQUE/tencdm.