Post-training pretrained autoregressive models (ARMs) into masked diffusion models (MDMs) has emerged as a cost-effective strategy to overcome the limitations of sequential generation. However, the internal algorithmic transformations induced by this paradigm shift remain unexplored, leaving it unclear whether post-trained MDMs acquire genuine bidirectional reasoning capabilities or merely repackage autoregressive heuristics. In this work, we address this question through a comparative circuit analysis of ARMs and their MDM counterparts. Our analysis reveals a systematic "mechanism shift" that depends on the structural nature of the task. Structurally, we observe a clear divergence: MDMs largely retain autoregressive circuitry on tasks dominated by local causal dependencies, but abandon the pathways inherited from initialization on global planning tasks, exhibiting pronounced rewiring characterized by increased early-layer processing. Semantically, we identify a transition from sharp, localized specialization in ARMs to distributed integration in MDMs. From these findings, we conclude that diffusion post-training does not merely adapt model parameters but fundamentally reorganizes internal computation to support non-sequential global planning.
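To make the notion of comparing circuits concrete, the following is a minimal sketch (not the paper's code) of how one might quantify circuit overlap and layer-depth shifts between an ARM and its MDM counterpart. It assumes each "circuit" is a set of (layer, head) components with attribution scores, e.g., obtained separately for each model via activation patching on the same task; all function names and the toy scores are hypothetical.

```python
# Minimal sketch: comparing circuits identified in an ARM and its MDM counterpart.
# Assumption: attribution scores per (layer, head) component are already available,
# e.g., from activation-patching experiments run independently on each model.
from collections import Counter


def top_components(scores, k):
    """Keep the k components with the largest absolute attribution score."""
    return set(sorted(scores, key=lambda c: abs(scores[c]), reverse=True)[:k])


def circuit_overlap(arm_scores, mdm_scores, k=20):
    """Jaccard overlap between the top-k circuits of the two models."""
    arm, mdm = top_components(arm_scores, k), top_components(mdm_scores, k)
    return len(arm & mdm) / len(arm | mdm)


def layer_histogram(components):
    """Distribution of circuit components over layers (to detect early-layer shifts)."""
    return Counter(layer for layer, _head in components)


if __name__ == "__main__":
    # Hypothetical attribution scores keyed by (layer, head); in practice these would
    # come from patching experiments on a concrete task for each model.
    arm = {(10, 3): 0.9, (11, 1): 0.7, (5, 2): 0.3, (2, 0): 0.1}
    mdm = {(2, 0): 0.8, (3, 4): 0.6, (10, 3): 0.5, (1, 1): 0.4}
    print("circuit overlap:", circuit_overlap(arm, mdm, k=3))
    print("ARM layer usage:", layer_histogram(top_components(arm, 3)))
    print("MDM layer usage:", layer_histogram(top_components(mdm, 3)))
```

Under this framing, a high overlap on a task would indicate retained autoregressive circuitry, while a low overlap together with a layer histogram shifted toward earlier layers would correspond to the kind of rewiring described above.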