Autoregressive (AR) models excel at generating temporally coherent audio by producing tokens sequentially, yet they often falter in faithfully following complex textual prompts, especially those describing intricate sound events. We uncover a surprising capability in AR audio generators: their early prefix tokens implicitly encode global semantic attributes of the final output, such as event count and sound-object category, revealing a form of implicit planning. Building on this insight, we propose Plan-Critic, a lightweight auxiliary model trained with a Generalized Advantage Estimation (GAE)-inspired objective to predict final instruction-following quality from partial generations. At inference time, Plan-Critic enables guided exploration: it evaluates candidate prefixes early, prunes low-fidelity trajectories, and reallocates computation to high-potential planning seeds. Our Plan-Critic-guided sampling achieves up to a 10-point improvement in CLAP score over the AR baseline, establishing a new state of the art in AR text-to-audio generation, while maintaining computational parity with standard best-of-N decoding. This work bridges the gap between causal generation and global semantic alignment, demonstrating that even strictly autoregressive models can plan ahead.
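The guided-exploration procedure described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names (`generate_prefix`, `extend`, `critic`) and the budget parameters are all hypothetical stand-ins for the AR generator and the Plan-Critic scorer.

```python
# Hedged sketch of Plan-Critic-guided sampling under a best-of-N budget.
# All callables below are hypothetical placeholders, not the paper's API:
#   generate_prefix(n) -> a short token prefix ("planning seed"),
#   extend(prefix, m)  -> the prefix continued by m more tokens,
#   critic(seq)        -> predicted final instruction-following quality.

def guided_sampling(generate_prefix, extend, critic,
                    n_candidates=8, keep=2, prefix_len=64, total_len=512):
    """Best-of-N decoding with early pruning by a prefix critic."""
    # 1) Sample N short candidate prefixes.
    prefixes = [generate_prefix(prefix_len) for _ in range(n_candidates)]
    # 2) Score each prefix early, before committing to a full generation.
    ranked = sorted(prefixes, key=critic, reverse=True)
    # 3) Prune low-fidelity trajectories; keep only high-potential seeds.
    survivors = ranked[:keep]
    # 4) Reallocate compute: complete only the surviving prefixes.
    completions = [extend(p, total_len - prefix_len) for p in survivors]
    # 5) Return the completion the critic ranks highest.
    return max(completions, key=critic)
```

The compute-parity claim follows from this shape: scoring N cheap prefixes and fully decoding only `keep` of them costs roughly the same token budget as standard best-of-N, while concentrating full-length generation on seeds the critic predicts will align with the prompt.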