Current autoregressive language models (ARMs) achieve high accuracy but require long token sequences, making inference costly. Discrete diffusion language models (DDLMs) enable parallel and flexible generation within a fixed number of steps and have recently attracted attention for their strong performance in complex reasoning and long-term planning tasks. We present a study of hybrid architectures that couple DDLMs with ARMs to assess whether their collaboration can yield complementary benefits. We first examine collaboration in text space, where one model plans the reasoning process and the other executes the final answer based on that plan. We then extend this setup to latent-space communication, introducing a learned projector that maps DDLM latents into the ARM's embedding space, potentially bypassing some of the text-generation limitations of diffusion models. We find that shifting DDLM→ARM communication from text space to latent space yields significant accuracy gains, for example from 27.0% to 54.0% on DART-5 and from 0.0% to 14.0% on AIME24. We also find that pairing a DDLM planner with an ARM executor can provide substantial computational savings with little to no loss in accuracy. For example, the latent-space pipeline, using 64 tokens for planning and roughly 5 for execution, surpasses Qwen3.1-7B on DART-5 and AIME24, despite Qwen using 44 times more tokens. Overall, our study offers new insights into reasoning with DDLMs and highlights their potential in hybrid architectures.
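To make the latent-space communication concrete, below is a minimal sketch of what such a learned projector could look like: a small MLP that maps DDLM latent states into the ARM's embedding space, whose outputs the ARM could consume as soft-prompt embeddings. All dimensions, the two-layer MLP design, and the `LatentProjector` name are hypothetical illustrations, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class LatentProjector(nn.Module):
    """Maps DDLM latent states into the ARM's embedding space.

    A minimal sketch of the learned projector described in the abstract;
    the two-layer MLP and all dimensions are assumed, not taken from the paper.
    """
    def __init__(self, ddlm_dim: int = 2048, arm_dim: int = 3584, hidden_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(ddlm_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, arm_dim),
        )

    def forward(self, ddlm_latents: torch.Tensor) -> torch.Tensor:
        # (batch, plan_len, ddlm_dim) -> (batch, plan_len, arm_dim)
        return self.proj(ddlm_latents)

# Usage sketch: project a 64-token latent plan from the DDLM planner and
# hand it to the ARM executor as continuous inputs (e.g., via a
# HuggingFace model's `inputs_embeds` argument).
projector = LatentProjector()
plan_latents = torch.randn(1, 64, 2048)   # stand-in for DDLM planner output
arm_inputs = projector(plan_latents)      # (1, 64, 3584), consumed by the ARM
```

Training such a projector end-to-end on the downstream task would let the pipeline skip decoding the plan into text entirely, which is one plausible reason the abstract reports latent-space communication outperforming text-space communication.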