Diffusion Language Models (DLMs) have emerged as a compelling alternative to autoregressive approaches, enabling parallel text generation with competitive performance. Despite these advantages, DLMs suffer from a critical instability: the moving sink phenomenon. Our analysis indicates that sink tokens exhibit low-norm representations in the Transformer's value space, and that the moving sink phenomenon serves as a protective mechanism in DLMs, preventing excessive information mixing. However, the sinks' unpredictable positions across diffusion steps undermine inference robustness. To resolve this, we propose a simple yet effective extra sink token implemented via a modified attention mask. Specifically, we introduce a special token that is constrained to attend solely to itself while remaining globally visible to all other tokens. Experimental results demonstrate that introducing this single extra token stabilizes attention sinks and substantially improves model performance. Crucially, further analysis confirms that the token's effectiveness is independent of its position and that the token carries negligible semantic content, validating its role as a robust, dedicated structural sink.
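As a minimal sketch of the described mask modification (not the paper's implementation), the following assumes a boolean mask convention where True marks an allowed attention edge, and places the sink token at position 0; the function name, index convention, and sink position are illustrative assumptions:

```python
import torch

def build_sink_attention_mask(seq_len: int, sink_pos: int = 0) -> torch.Tensor:
    """Build a (seq_len, seq_len) boolean attention mask with one dedicated
    sink token: the sink attends only to itself, while all other tokens
    attend to every position, including the sink.

    Index and sign conventions here are illustrative assumptions.
    """
    # Start from full bidirectional attention, as used in diffusion LMs.
    mask = torch.ones(seq_len, seq_len, dtype=torch.bool)
    # Restrict the sink token's row so it attends solely to itself.
    mask[sink_pos, :] = False
    mask[sink_pos, sink_pos] = True
    # Column sink_pos is left True, keeping the sink globally visible.
    return mask

# Example: a 5-token sequence with the extra sink prepended at position 0.
print(build_sink_attention_mask(5))
```

In this sketch the asymmetry between the sink's row (its outgoing attention) and its column (its visibility to others) is what lets other tokens route excess attention mass to a fixed position without the sink absorbing contextual information itself.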