Diffusion language models enable any-order generation and bidirectional conditioning, offering appealing flexibility for tasks such as infilling, rewriting, and self-correction. However, their formulation, which predicts one part of a sequence from another within a single-step dependency, limits modeling depth and often yields lower sample quality and stability than autoregressive (AR) models. To address this, we revisit autoregressive modeling as a foundation and reformulate diffusion-style training into a structured multi-group prediction process. We propose Any-order Any-subset Autoregressive modeling (A3), a generalized framework that extends the standard AR factorization to arbitrary token groups and generation orders. A3 preserves the probabilistic rigor and multi-layer dependency modeling of AR models while inheriting diffusion models' flexibility for parallel and bidirectional generation. We implement A3 through a two-stream attention architecture and a progressive adaptation strategy that transitions pretrained AR models toward any-order prediction. Experiments on question answering, commonsense reasoning, and story infilling demonstrate that A3 outperforms diffusion-based models while maintaining flexible decoding. This work offers a unified approach toward a flexible and efficient language modeling paradigm.
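The generalized factorization described above can be sketched as follows; this is an illustrative formulation, with the partition and permutation notation assumed rather than taken from the paper. Let the sequence $x$ be partitioned into arbitrary token groups $G_1, \dots, G_K$ and let $\sigma$ be a permutation over group indices:

```latex
% A3-style any-order, any-subset factorization (notation is illustrative):
% the sequence is split into groups G_1..G_K, generated in an arbitrary
% order sigma; tokens within a group can be predicted in parallel.
\[
p_\theta(x) \;=\; \prod_{k=1}^{K}
  p_\theta\!\left( x_{G_{\sigma(k)}} \,\middle|\,
    x_{G_{\sigma(1)}}, \dots, x_{G_{\sigma(k-1)}} \right)
\]
```

Setting $K$ to the sequence length with the identity permutation recovers the standard left-to-right AR factorization, while a single group with parallel prediction recovers diffusion-style one-shot decoding.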