Mixture-of-Experts-based (MoE-based) diffusion models demonstrate remarkable scalability in high-fidelity image generation, yet their reliance on expert parallelism introduces critical communication bottlenecks. State-of-the-art methods alleviate this overhead in parallel diffusion inference through computation-communication overlapping, termed displaced parallelism. However, we identify that these techniques induce severe *staleness*: the use of outdated activations from previous timesteps, which significantly degrades quality, especially in expert-parallel settings. We tackle this fundamental tension and propose DICE, a staleness-centric optimization framework with a three-fold approach: (1) Interweaved Parallelism introduces staggered pipelines, effectively halving step-level staleness at no extra cost; (2) Selective Synchronization operates at the layer level and protects layers vulnerable to stale activations; and (3) Conditional Communication, a token-level, training-free method that dynamically adjusts communication frequency based on token importance. Together, these strategies effectively reduce staleness, achieving a 1.26x speedup with minimal quality degradation. Empirical results establish DICE as an effective and scalable solution. Our code is publicly available at https://anonymous.4open.science/r/DICE-FF04
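To make the token-level idea behind Conditional Communication concrete, below is a minimal sketch. It assumes a norm-based importance score and a fixed refresh ratio, both of which are illustrative stand-ins rather than the paper's actual criterion, and it elides the distributed all-to-all exchange that a real expert-parallel system would perform.

```python
import torch

def conditional_communication(hidden, cached_hidden, keep_ratio=0.25):
    """Illustrative sketch of token-level conditional communication.

    The importance score and keep_ratio below are assumptions for
    illustration, not the paper's exact formulation.

    hidden:        [num_tokens, dim] activations at the current step
    cached_hidden: [num_tokens, dim] stale activations from a prior step
    keep_ratio:    fraction of tokens refreshed via communication this step
    """
    # Score each token, here by activation magnitude (an assumed proxy
    # for token importance).
    importance = hidden.norm(dim=-1)
    k = max(1, int(keep_ratio * hidden.size(0)))
    top_idx = importance.topk(k).indices

    # Only the selected tokens pay the expert-parallel communication cost;
    # in a real deployment this slice would come from an all-to-all
    # exchange (e.g., torch.distributed.all_to_all) rather than locally.
    refreshed = hidden[top_idx]

    # The remaining tokens reuse their stale cached activations, so
    # staleness is confined to the less important part of the sequence.
    out = cached_hidden.clone()
    out[top_idx] = refreshed
    return out
```

The design choice this sketch illustrates is that communication frequency becomes a per-token decision: important tokens stay fresh every step, while the rest tolerate bounded staleness, trading a small quality risk for reduced all-to-all traffic.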