Pixel diffusion generates images directly in pixel space in an end-to-end manner, avoiding the artifacts and bottlenecks introduced by VAEs in two-stage latent diffusion. However, optimizing high-dimensional pixel manifolds that contain many perceptually irrelevant signals is challenging, leaving existing pixel diffusion methods lagging behind latent diffusion models. We propose PixelGen, a simple pixel diffusion framework with perceptual supervision. Instead of modeling the full image manifold, PixelGen introduces two complementary perceptual losses that guide the diffusion model toward learning a more meaningful perceptual manifold: an LPIPS loss facilitates learning better local patterns, while a DINO-based perceptual loss strengthens global semantics. With perceptual supervision, PixelGen surpasses strong latent diffusion baselines. It achieves an FID of 5.11 on ImageNet-256 without classifier-free guidance using only 80 training epochs, and demonstrates favorable scaling performance on large-scale text-to-image generation with a GenEval score of 0.79. PixelGen requires no VAEs, no latent representations, and no auxiliary stages, providing a simpler yet more powerful generative paradigm. Code is publicly available at https://github.com/Zehong-Ma/PixelGen.
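To make the training objective concrete, below is a minimal sketch of how a standard diffusion loss could be combined with the two perceptual losses described above, assuming a DDPM-style noise-prediction model in PyTorch. The `model(xt, t)` signature, the loss weights `lambda_lpips` and `lambda_dino`, and the choice of the `lpips` package and a DINO ViT-S/16 checkpoint from `torch.hub` are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import lpips  # pip install lpips

# LPIPS perceptual loss for local patterns (VGG backbone is one common choice).
lpips_fn = lpips.LPIPS(net="vgg").eval()

# Frozen DINO encoder for global semantic features (ViT-S/16 via torch.hub).
dino = torch.hub.load("facebookresearch/dino:main", "dino_vits16").eval()
for p in list(lpips_fn.parameters()) + list(dino.parameters()):
    p.requires_grad_(False)

def training_loss(model, x0, t, noise, alpha_bar,
                  lambda_lpips=0.5, lambda_dino=0.5):
    """Diffusion loss plus the two complementary perceptual losses.

    x0:        clean images in [-1, 1], shape (B, 3, H, W)
    t:         integer timesteps, shape (B,)
    noise:     Gaussian noise, same shape as x0
    alpha_bar: cumulative noise-schedule products, shape (T,)
    """
    a = alpha_bar[t].view(-1, 1, 1, 1)
    xt = a.sqrt() * x0 + (1 - a).sqrt() * noise        # forward diffusion
    eps_pred = model(xt, t)                            # predict the noise
    diff_loss = torch.mean((eps_pred - noise) ** 2)    # standard DDPM loss

    # Recover a pixel-space estimate of the clean image from the prediction,
    # so the perceptual losses can be applied directly in pixel space.
    x0_pred = (xt - (1 - a).sqrt() * eps_pred) / a.sqrt()

    # LPIPS expects inputs in [-1, 1]; strengthens local texture fidelity.
    loss_lpips = lpips_fn(x0_pred.clamp(-1, 1), x0).mean()

    # DINO features compared in feature space; strengthens global semantics.
    # (ImageNet normalization / resizing of inputs omitted for brevity.)
    loss_dino = torch.mean((dino(x0_pred) - dino(x0)) ** 2)

    return diff_loss + lambda_lpips * loss_lpips + lambda_dino * loss_dino
```

One design point worth noting: both perceptual losses operate on `x0_pred`, the model's pixel-space estimate of the clean image, rather than on the noise prediction itself, since LPIPS and DINO are defined over images, not over noise residuals.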