Multimodal autoregressive (AR) models, built on next-token prediction and the transformer architecture, have demonstrated remarkable capabilities across a range of multimodal tasks, including text-to-image (T2I) generation. Despite their strong performance on general T2I tasks, our research reveals that these models initially struggle with subject-driven image generation when compared to the dominant diffusion models. To address this limitation, we introduce Proxy-Tuning, which leverages diffusion models to enhance AR models' capabilities in subject-specific image generation. Our method reveals a striking weak-to-strong phenomenon: the fine-tuned AR models consistently outperform their diffusion-model supervisors in both subject fidelity and prompt adherence. We analyze this performance shift and identify scenarios where AR models excel, particularly in multi-subject composition and contextual understanding. This work not only demonstrates impressive results for subject-driven AR image generation but also unveils the potential of weak-to-strong generalization in the image generation domain, contributing to a deeper understanding of the strengths and limitations of different architectures.
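The abstract does not spell out the training recipe, so the sketch below is one plausible reading of Proxy-Tuning rather than the paper's actual method: a subject-tuned diffusion model acts as the weak proxy supervisor and synthesizes subject images, and the AR model is then fine-tuned on those proxy-generated (prompt, image) pairs with its ordinary next-token objective. Everything AR-side here is an illustrative assumption, including `ToyARModel`, the placeholder `text_encode` and `vq_encode` tokenizers, the `sks dog` prompts, and the DreamBooth checkpoint path; only the `diffusers` calls use real APIs.

```python
# A minimal sketch of one plausible Proxy-Tuning pipeline (assumptions noted
# inline). Stage 1 uses a real diffusers API; everything AR-side is a toy.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms.functional as TF
from diffusers import StableDiffusionPipeline

VOCAB = 8192   # shared text+image token vocabulary (assumption)
GRID = 16      # image-token grid -> 256 image tokens per image (assumption)

def text_encode(prompt, max_len=32):
    """Placeholder text tokenizer; real systems use a trained BPE tokenizer."""
    ids = [hash(w) % VOCAB for w in prompt.lower().split()][:max_len]
    return torch.tensor(ids, dtype=torch.long)

def vq_encode(image):
    """Placeholder for a VQ image tokenizer. Real AR T2I models map an image
    to a grid of discrete codebook indices; here we just bucket grayscale
    pixels so the sketch runs end to end."""
    x = TF.to_tensor(image).mean(0, keepdim=True)      # (1, H, W) grayscale
    x = TF.resize(x, [GRID, GRID])[0]                  # (GRID, GRID)
    return (x.clamp(0, 1) * (VOCAB - 1)).long().flatten()

class ToyARModel(nn.Module):
    """Toy decoder-only transformer trained with next-token prediction over
    the concatenated text-then-image token sequence."""
    def __init__(self, dim=256, heads=4, layers=4, max_len=512):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, dim)
        self.pos = nn.Embedding(max_len, dim)
        block = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.blocks = nn.TransformerEncoder(block, layers)
        self.head = nn.Linear(dim, VOCAB)

    def forward(self, tokens):                          # tokens: (B, T) int64
        T = tokens.size(1)
        h = self.tok(tokens) + self.pos(torch.arange(T, device=tokens.device))
        causal = torch.triu(torch.full((T, T), float("-inf"),
                                       device=tokens.device), diagonal=1)
        logits = self.head(self.blocks(h, mask=causal))
        # next-token cross-entropy: predict token t+1 from the prefix up to t
        return F.cross_entropy(logits[:, :-1].reshape(-1, VOCAB),
                               tokens[:, 1:].reshape(-1))

# Stage 1: the "weak" proxy supervisor. A diffusion model already fine-tuned
# on the subject (e.g., via DreamBooth; the checkpoint path is an assumption)
# synthesizes subject images for diverse prompts. Assumes a CUDA GPU.
proxy = StableDiffusionPipeline.from_pretrained(
    "path/to/dreambooth-subject-checkpoint", torch_dtype=torch.float16
).to("cuda")
prompts = [f"a photo of sks dog {c}" for c in
           ("in a park", "on a beach", "wearing a red hat")]
proxy_images = [proxy(p).images[0] for p in prompts]

# Stage 2: fine-tune the AR student on the proxy-generated (prompt, image)
# pairs with the ordinary next-token objective.
model = ToyARModel()
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)
for epoch in range(10):
    for prompt, image in zip(prompts, proxy_images):
        seq = torch.cat([text_encode(prompt), vq_encode(image)]).unsqueeze(0)
        loss = model(seq)
        loss.backward()
        opt.step()
        opt.zero_grad()
```

Under this reading, the weak-to-strong claim is that the AR student, trained only on the proxy's outputs, ends up exceeding the proxy itself on subject fidelity and prompt adherence.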