We present HunyuanImage 3.0, a native multimodal model that unifies multimodal understanding and generation within an autoregressive framework, with its image generation module publicly available. HunyuanImage 3.0 rests on several key components: meticulous data curation, an advanced architecture design, a native Chain-of-Thought schema, progressive model pre-training, aggressive model post-training, and an efficient infrastructure that enables large-scale training and inference. With these advances, we successfully trained a Mixture-of-Experts (MoE) model comprising over 80 billion parameters in total, of which 13 billion are activated per token during inference, making it the largest and most powerful open-source image generation model to date. We conducted extensive experiments, and the results of automatic and human evaluations of text-image alignment and visual quality show that HunyuanImage 3.0 rivals previous state-of-the-art models. By releasing the code and weights of HunyuanImage 3.0, we aim to enable the community to explore new ideas with a state-of-the-art foundation model, fostering a dynamic and vibrant multimodal ecosystem. All open-source assets are publicly available at https://github.com/Tencent-Hunyuan/HunyuanImage-3.0.
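To make the "80 billion total, 13 billion activated" distinction concrete, the sketch below shows a minimal sparse MoE layer in PyTorch: a router scores each token against all experts, but only the top-k experts actually run, so most of the layer's parameters stay idle for any given token. All dimensions, expert counts, and the top-k value here are illustrative assumptions for exposition, not the released HunyuanImage 3.0 configuration.

```python
# Minimal sparse Mixture-of-Experts routing sketch (illustrative only; the
# expert count, top-k, and hidden sizes are assumptions, not HunyuanImage 3.0's).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, dim=1024, num_experts=16, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(dim, num_experts)  # scores each token per expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (tokens, dim)
        gate = F.softmax(self.router(x), dim=-1)      # (tokens, num_experts)
        weights, idx = gate.topk(self.top_k, dim=-1)  # keep only the top-k experts
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        # Only top_k / num_experts of the expert parameters touch each token,
        # which is how total parameters can far exceed activated parameters.
        for k in range(self.top_k):
            for e in range(len(self.experts)):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k, None] * self.experts[e](x[mask])
        return out

tokens = torch.randn(8, 1024)
moe = SparseMoE()
print(moe(tokens).shape)  # torch.Size([8, 1024]); only 2 of 16 experts run per token
```

With this routing pattern, per-token compute scales with the activated parameter count (here 2 of 16 experts) rather than the total, which is the property the abstract's 80B-total / 13B-activated figures describe.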