In this paper, we introduce PixArt-\Sigma, a Diffusion Transformer model~(DiT) capable of directly generating images at 4K resolution. PixArt-\Sigma represents a significant advancement over its predecessor, PixArt-\alpha, offering images of markedly higher fidelity and improved alignment with text prompts. A key feature of PixArt-\Sigma is its training efficiency. Leveraging the foundational pre-training of PixArt-\alpha, it evolves from the `weaker' baseline to a `stronger' model by incorporating higher-quality data, a process we term ``weak-to-strong training''. The advancements in PixArt-\Sigma are twofold: (1) High-Quality Training Data: PixArt-\Sigma incorporates superior-quality image data paired with more precise and detailed image captions. (2) Efficient Token Compression: we propose a novel attention module within the DiT framework that compresses both keys and values, significantly improving efficiency and facilitating ultra-high-resolution image generation. Thanks to these improvements, PixArt-\Sigma achieves superior image quality and adherence to user prompts with a significantly smaller model size (0.6B parameters) than existing text-to-image diffusion models such as SDXL (2.6B parameters) and SD Cascade (5.1B parameters). Moreover, PixArt-\Sigma's capability to generate 4K images supports the creation of high-resolution posters and wallpapers, efficiently bolstering the production of high-quality visual content in industries such as film and gaming.
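To make the token-compression idea concrete, a minimal formulation follows (the notation and the compression operator $f_c$ are illustrative assumptions here, not necessarily the exact operator used in the released model): given $N$ latent tokens with queries $Q$, keys $K \in \mathbb{R}^{N \times d}$, and values $V \in \mathbb{R}^{N \times d}$, a spatial compression operator $f_c$ with per-side ratio $R$ shrinks the key/value set before attention,
\[
K' = f_c(K), \qquad V' = f_c(V), \qquad \mathrm{Attn}(Q, K', V') = \mathrm{softmax}\!\left(\frac{Q {K'}^{\top}}{\sqrt{d}}\right) V',
\]
so that $K'$ and $V'$ retain roughly $N/R^{2}$ tokens on a 2D latent grid and the attention cost drops from $O(N^{2} d)$ to $O(N^{2} d / R^{2})$, which is what makes attention over ultra-high-resolution token grids tractable.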