Diffusion models have achieved state-of-the-art synthesis quality on both visual and audio tasks, and recent works further adapt them to textual data by diffusing in the embedding space. In this paper, we conduct systematic studies of the optimization challenges posed by both the embedding space and the denoising model, which have not been carefully explored before. First, the data distribution of the embeddings is learnable, which may lead to collapse of the embedding space and unstable training. To alleviate this problem, we propose a new objective called the anchor loss, which is more efficient than previous methods. Second, we find that the noise levels of conventional schedules are insufficient for training a desirable denoising model and consequently introduce varying degrees of degeneration. To address this challenge, we propose a novel framework called noise rescaling. Based on the above analyses, we propose Difformer, an embedding diffusion model based on Transformer. Experiments on a variety of seminal text generation tasks show the effectiveness of the proposed methods and the superiority of Difformer over previous state-of-the-art embedding diffusion baselines.