Current unified multimodal models for image generation and editing typically rely on massive parameter counts (e.g., >10B), incurring prohibitive training costs and large deployment footprints. In this work, we present DeepGen 1.0, a lightweight 5B unified model whose comprehensive capabilities are competitive with, or surpass, those of much larger counterparts. To overcome the limitations of compact models in semantic understanding and fine-grained control, we introduce Stacked Channel Bridging (SCB), a deep alignment framework that extracts hierarchical features from multiple VLM layers and fuses them with learnable 'think tokens' to provide the generative backbone with structured, reasoning-rich guidance. We further design a data-centric training strategy spanning three progressive stages: (1) Alignment Pre-training on large-scale image-text pairs and editing triplets to synchronize VLM and DiT representations; (2) Joint Supervised Fine-tuning on a high-quality mixture of generation, editing, and reasoning tasks to foster omni-capabilities; and (3) Reinforcement Learning with MR-GRPO, which leverages a mixture of reward functions and supervision signals to deliver substantial gains in generation quality and alignment with human preferences, while keeping training stable and avoiding visual artifacts. Despite being trained on only ~50M samples, DeepGen 1.0 achieves leading performance across diverse benchmarks, surpassing the 80B HunyuanImage by 28% on WISE and the 27B Qwen-Image-Edit by 37% on UniREditBench. By open-sourcing our training code, weights, and datasets, we provide an efficient, high-performance alternative that helps democratize unified multimodal research.
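The SCB idea described above, tapping hidden states from several VLM layers, stacking them along the channel dimension together with learnable 'think tokens', and projecting the result to the DiT conditioning width, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: all shapes, the choice of tapped layers, and the single linear projection are illustrative placeholders, not the released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not the paper's actual sizes)
seq_len, vlm_dim, dit_dim = 16, 64, 48
tapped_layers = [8, 16, 24]   # hypothetical VLM layers to tap
n_think = 4                   # number of learnable 'think tokens'

# Stand-in hidden states from the tapped VLM layers: (seq_len, vlm_dim) each
vlm_states = {l: rng.standard_normal((seq_len, vlm_dim)) for l in tapped_layers}

# Learnable parameters (randomly initialized here for the sketch)
think_tokens = rng.standard_normal((n_think, vlm_dim * len(tapped_layers)))
W_bridge = rng.standard_normal((vlm_dim * len(tapped_layers), dit_dim)) * 0.02

def stacked_channel_bridge(states, layers, think, W):
    """Stack per-layer features along channels, prepend think tokens, project."""
    stacked = np.concatenate([states[l] for l in layers], axis=-1)  # (seq, 3*vlm_dim)
    fused = np.concatenate([think, stacked], axis=0)                # (n_think+seq, 3*vlm_dim)
    return fused @ W                                                # (n_think+seq, dit_dim)

cond = stacked_channel_bridge(vlm_states, tapped_layers, think_tokens, W_bridge)
print(cond.shape)  # (20, 48): conditioning sequence handed to the DiT backbone
```

The key property is that every position in the conditioning sequence carries features from multiple depths of the VLM at once, rather than only its final layer.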
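The "mixture of reward functions" in the MR-GRPO stage can be sketched in the GRPO style: several reward signals are combined per candidate, then advantages are normalized within each group of candidates sampled from the same prompt. The reward sources, mixture weights, and all values below are assumptions for illustration only.

```python
import numpy as np

# Hypothetical per-sample scores from three reward sources for a group of
# G = 4 candidate images generated from the same prompt (values illustrative).
aesthetic  = np.array([0.6, 0.8, 0.5, 0.7])
text_align = np.array([0.9, 0.4, 0.7, 0.8])
human_pref = np.array([0.5, 0.6, 0.9, 0.4])

# Mixture of rewards: a weighted combination (weights are assumptions).
w = {"aesthetic": 0.3, "text_align": 0.4, "human_pref": 0.3}
mixed = (w["aesthetic"] * aesthetic
         + w["text_align"] * text_align
         + w["human_pref"] * human_pref)

# GRPO-style group-relative advantage: normalize within the group so the
# policy update favors above-average candidates without a value network.
adv = (mixed - mixed.mean()) / (mixed.std() + 1e-8)
print(np.round(adv, 3))
```

Because advantages are centered within each group, the update signal stays bounded even when individual reward models drift in scale, which is one plausible reason a reward mixture can remain stable in training.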