Distilled autoregressive diffusion models enable real-time short-video synthesis but suffer from severe error accumulation during long-sequence generation. While existing Test-Time Optimization (TTO) methods prove effective for images or short clips, we find that they fail to mitigate drift in extended sequences due to unstable reward landscapes and the hypersensitivity of distilled parameters. To overcome these limitations, we introduce Test-Time Correction (TTC), a training-free alternative. Specifically, TTC uses the initial frame as a stable reference anchor to calibrate intermediate stochastic states along the sampling trajectory. Extensive experiments demonstrate that our method integrates seamlessly with various distilled models, extending generation length with negligible overhead while matching the quality of resource-intensive training-based methods on 30-second benchmarks.
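The core intuition of anchor-based correction can be illustrated with a toy simulation. The sketch below is purely hypothetical (the paper's actual latent spaces, sampler, and calibration rule are not specified here): each "frame" copies its predecessor plus a small systematic drift, and the correction step pulls every intermediate state back toward the first-frame anchor with strength `alpha` (an assumed blending parameter).

```python
import numpy as np

rng = np.random.default_rng(0)

def generate(n_frames, anchor, alpha=0.0):
    """Toy autoregressive rollout in an 8-dim 'latent' space.

    Each frame inherits the previous state plus a systematic drift
    (modeling accumulated error in distilled models). When alpha > 0,
    each intermediate state is blended back toward the anchor frame,
    mimicking the test-time correction idea.
    """
    frames = [anchor.copy()]
    drift = rng.normal(0.05, 0.01, size=anchor.shape)  # persistent bias
    for _ in range(n_frames - 1):
        x = frames[-1] + drift + rng.normal(0, 0.01, size=anchor.shape)
        x = x + alpha * (anchor - x)  # hypothetical correction step
        frames.append(x)
    return np.stack(frames)

anchor = rng.normal(size=8)
plain = generate(60, anchor, alpha=0.0)       # no correction: drift grows
corrected = generate(60, anchor, alpha=0.3)   # anchored: drift is bounded

err_plain = np.linalg.norm(plain[-1] - anchor)
err_corr = np.linalg.norm(corrected[-1] - anchor)
print(err_plain, err_corr)
```

Without correction the error grows linearly with sequence length, whereas the anchored rollout settles near a fixed offset of roughly `drift / alpha`, which is why a stable reference frame can bound drift over arbitrarily long sequences in this simplified model.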