Scalability has driven recent advances in generative modeling, yet its principles remain underexplored for adversarial learning. We investigate the scalability of Generative Adversarial Networks (GANs) through two design choices that have proven effective in other types of generative models: training in a compact Variational Autoencoder latent space and adopting purely transformer-based generators and discriminators. Training in latent space enables efficient computation while preserving perceptual fidelity, and this efficiency pairs naturally with plain transformers, whose performance scales with computational budget. Building on these choices, we analyze failure modes that emerge when naively scaling GANs: in particular, underutilization of early generator layers and optimization instability as the network grows. Accordingly, we propose simple, scale-friendly remedies, namely lightweight intermediate supervision and width-aware learning-rate adjustment. Our experiments show that GAT, a purely transformer-based, latent-space GAN, can be trained reliably across a wide range of capacities (S through XL). Moreover, GAT-XL/2 achieves state-of-the-art single-step, class-conditional generation performance (FID of 2.96) on ImageNet-256 in just 40 epochs, 6x fewer than strong baselines.
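To make the width-aware learning-rate adjustment concrete, the sketch below illustrates one common realization of the idea: scaling the learning rate inversely with hidden width relative to a base configuration so that wider variants keep comparable update magnitudes. The specific widths, base learning rate, optimizer settings, and the 1/width rule are illustrative assumptions, not the paper's released recipe.

```python
# Minimal sketch of width-aware learning-rate adjustment (assumed 1/width rule,
# muP-style heuristic); widths and base LR below are hypothetical, not from the paper.
import torch

def width_aware_lr(base_lr: float, base_width: int, width: int) -> float:
    """Scale the base learning rate by base_width / width."""
    return base_lr * base_width / width

# Hypothetical hidden widths for S/B/L/XL transformer variants.
widths = {"S": 384, "B": 768, "L": 1024, "XL": 1152}
base_lr, base_width = 2e-4, widths["S"]

for name, width in widths.items():
    model = torch.nn.Linear(width, width)  # stand-in for a transformer generator block
    lr = width_aware_lr(base_lr, base_width, width)
    opt = torch.optim.Adam(model.parameters(), lr=lr, betas=(0.0, 0.99))
    print(f"GAT-{name}: width={width}, lr={lr:.2e}")
```

Under this assumed rule, the XL variant trains with a learning rate a few times smaller than the S variant, which is one way the abstract's "optimization instability as the network grows" could be mitigated without per-model tuning.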