We present JanusFlow, a powerful framework that unifies image understanding and generation in a single model. JanusFlow introduces a minimalist architecture that integrates autoregressive language models with rectified flow, a state-of-the-art method in generative modeling. A key finding is that rectified flow can be trained straightforwardly within the large language model framework, eliminating the need for complex architectural modifications. To further improve the performance of our unified model, we adopt two key strategies: (i) decoupling the understanding and generation encoders, and (ii) aligning their representations during unified training. Extensive experiments show that JanusFlow achieves performance comparable or superior to specialized models in their respective domains, while significantly outperforming existing unified approaches across standard benchmarks. This work represents a step toward more efficient and versatile vision-language models.
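The rectified flow objective referred to above can be sketched in a few lines: sample a point on the straight-line path between noise and data, and regress the model's predicted velocity onto the constant displacement between the two endpoints. The code below is an illustrative sketch of this standard objective, not JanusFlow's actual implementation; all names (`rectified_flow_loss`, `velocity_model`) are hypothetical.

```python
import numpy as np

def rectified_flow_loss(velocity_model, x0, x1, t):
    """Rectified flow training objective (sketch).

    x0: noise samples, x1: data samples, t: per-sample timesteps in [0, 1].
    The model is trained to predict the constant velocity x1 - x0 at the
    linearly interpolated point x_t = t * x1 + (1 - t) * x0.
    """
    t = t.reshape(-1, 1)                 # broadcast t over feature dims
    x_t = t * x1 + (1.0 - t) * x0        # point on the straight-line path
    v_target = x1 - x0                   # constant target velocity
    v_pred = velocity_model(x_t, t)
    return np.mean((v_pred - v_target) ** 2)

# Toy check with an oracle "model" that returns the true velocity,
# so the loss should be exactly zero.
rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 2))   # noise batch
x1 = rng.standard_normal((4, 2))   # data batch
t = rng.uniform(size=4)            # timesteps

oracle = lambda x_t, t: x1 - x0
loss = rectified_flow_loss(oracle, x0, x1, t)
```

In JanusFlow the velocity model is realized inside the language model itself, which is what makes the minimalist unified architecture possible; the sketch above only shows the shape of the training signal.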