We systematically study antithetic initial noise in diffusion models, discovering that pairing each noise sample with its negation consistently yields strongly negatively correlated output pairs. This phenomenon is universal: it holds across datasets, model architectures, conditional and unconditional sampling, and even other generative models such as VAEs and Normalizing Flows. To explain it, we combine experiments with theory and propose a \textit{symmetry conjecture}: the learned score function is approximately affine antisymmetric (odd symmetry up to a constant shift), a claim supported by empirical evidence. The induced negative correlation enables substantially more reliable uncertainty quantification, with confidence intervals up to $90\%$ narrower. We demonstrate these gains on tasks including estimating pixel-wise statistics and evaluating diffusion inverse solvers. We further extend the approach with randomized quasi-Monte Carlo noise designs for uncertainty quantification, and explore additional applications of antithetic noise to improving image editing and generation diversity. Our framework is training-free, model-agnostic, and adds no runtime overhead. Code is available at https://github.com/jjia131/Antithetic-Noise-in-Diffusion-Models-page.
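The variance-reduction mechanism behind antithetic pairing can be illustrated with a minimal, self-contained Monte Carlo sketch. Here the statistic of the initial noise is a hypothetical stand-in (a shifted \texttt{tanh}, not anything from the paper) chosen to be roughly odd, mimicking the approximate affine antisymmetry conjectured for the learned score map; pairing each draw with its negation then produces negatively correlated estimates whose paired means have lower variance than plain i.i.d. sampling at the same budget.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical smooth statistic of the initial noise, standing in for a
# pixel-wise statistic of a generated image. A roughly odd-symmetric map
# (here tanh with a small shift) is where antithetic pairing helps most.
def statistic(z):
    return np.tanh(z + 0.2).mean(axis=-1)

d, n = 16, 2000  # noise dimension, number of antithetic pairs

# Plain Monte Carlo baseline: 2n independent Gaussian noise draws.
z_iid = rng.standard_normal((2 * n, d))
est_iid = statistic(z_iid)

# Antithetic design: n draws, each paired with its negation -z.
z = rng.standard_normal((n, d))
pair_means = 0.5 * (statistic(z) + statistic(-z))

# Variance of the overall mean estimator under each design
# (both use 2n evaluations of the statistic in total).
var_iid = est_iid.var(ddof=1) / (2 * n)
var_anti = pair_means.var(ddof=1) / n
print(var_anti < var_iid)  # antithetic pairs yield the lower-variance estimate
```

Because the statistic is monotone in each coordinate, $f(z)$ and $f(-z)$ are negatively correlated, so averaging within pairs cancels most of the fluctuation; this is the same mechanism that narrows the confidence intervals reported above.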