Discrete diffusion language models have shown strong potential for text generation, yet standard supervised fine-tuning (SFT) misaligns with their semi-autoregressive inference: training randomly masks tokens across the entire response, while inference generates fixed-size blocks sequentially. This mismatch introduces noisy prefixes and leaky suffixes, biasing gradients away from the desired blockwise likelihood. We propose Blockwise SFT, which partitions responses into fixed-size blocks, selects one active block per step for stochastic masking, freezes all preceding tokens, and fully hides future ones. Loss is computed only over the active block, directly mirroring the blockwise decoding process. Experiments on GSM8K, MATH, and MetaMathQA show consistent gains over classical SFT under equal compute or token budgets. Block size consistency studies and ablations confirm that improvements stem from faithful training-inference alignment rather than incidental masking effects. Our results highlight the importance of matching supervision granularity to the decoding procedure in diffusion-based language models.
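The masking scheme described above (freeze the prefix, hide the suffix, stochastically mask only the active block) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the function name `blockwise_sft_step` and the per-token role labels are hypothetical, and the mask ratio is drawn uniformly for simplicity.

```python
import random

def blockwise_sft_step(response_len, block_size, rng=None):
    """One Blockwise SFT masking step (hypothetical sketch).

    Partitions the response into fixed-size blocks, picks one active
    block, freezes all preceding tokens, fully hides future ones, and
    stochastically masks tokens inside the active block. Loss would be
    computed only on the masked positions of the active block.
    """
    rng = rng or random.Random()
    n_blocks = -(-response_len // block_size)   # ceil division
    b = rng.randrange(n_blocks)                 # active block for this step
    start = b * block_size
    end = min(start + block_size, response_len)

    ratio = rng.random()                        # mask ratio (uniform here; an assumption)
    roles = []
    for pos in range(response_len):
        if pos < start:
            roles.append("frozen")              # clean prefix: visible, no gradient
        elif pos >= end:
            roles.append("hidden")              # future blocks: removed from input
        elif rng.random() < ratio:
            roles.append("masked")              # supervised: loss computed here
        else:
            roles.append("visible")             # unmasked token in the active block
    return b, roles
```

Because loss falls only on `"masked"` positions inside the active block, each training step mirrors one step of semi-autoregressive blockwise decoding, avoiding the noisy prefixes and leaky suffixes of full-response random masking.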