Diffusion Language Models (DLMs) incur high inference cost due to iterative denoising, motivating efficient pruning. Existing pruning heuristics, largely inherited from autoregressive (AR) LLMs, typically preserve attention-sink tokens because AR sinks serve as stable global anchors. We show that this assumption does not hold for DLMs: the attention-sink position exhibits substantially higher variance over the full generation trajectory (measured by how the dominant sink locations shift across timesteps), indicating that sinks are often transient and less structurally essential than in AR models. Based on this observation, we propose ${\bf \texttt{Sink-Aware Pruning}}$, which automatically identifies and prunes unstable sinks in DLMs, in contrast to prior work that preserves sinks in AR LLMs. Without retraining, our method achieves a better quality-efficiency trade-off and outperforms strong prior pruning baselines under matched compute. Our code is available at https://github.com/VILA-Lab/Sink-Aware-Pruning.
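To make the instability measurement concrete, the following is a minimal sketch of how one might quantify sink movement across denoising timesteps, assuming per-timestep attention maps are available as a NumPy array; the function name and the shift-rate metric are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def sink_shift_rate(attn):
    """Quantify how unstable the dominant attention sink is across timesteps.

    attn: array of shape (T, H, L, L) -- attention weights for T denoising
          timesteps, H heads, and sequence length L (rows are queries,
          columns are keys). This is a hypothetical interface.
    Returns (sinks, shift_rate): the dominant sink position per timestep and
    the fraction of consecutive timesteps where that position changes
    (higher = more transient sinks, as observed in DLMs).
    """
    # Attention mass each key position receives, averaged over heads
    # and summed over query positions: shape (T, L).
    mass = attn.mean(axis=1).sum(axis=1)
    # Dominant sink = key position receiving the most attention mass.
    sinks = mass.argmax(axis=1)
    # Fraction of timestep transitions where the dominant sink moves.
    shift_rate = float(np.mean(sinks[1:] != sinks[:-1]))
    return sinks, shift_rate

# Synthetic demo: force the sink to rotate one position per timestep.
rng = np.random.default_rng(0)
T, H, L = 8, 2, 6
attn = rng.random((T, H, L, L))
for t in range(T):
    attn[t, :, :, t % L] += 10.0          # inject a moving sink column
attn /= attn.sum(axis=-1, keepdims=True)  # renormalize rows to sum to 1

sinks, rate = sink_shift_rate(attn)
```

Under an AR-style stable sink (e.g., always position 0), `rate` would be near 0; a rate near 1 indicates the transient-sink behavior the abstract describes in DLMs.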