Reinforcement learning (RL) has garnered increasing attention in text-to-image (T2I) generation. However, most existing RL approaches are tailored to either diffusion models or autoregressive models, overlooking an important alternative: masked generative models. In this work, we propose Mask-GRPO, the first method to incorporate Group Relative Policy Optimization (GRPO)-based RL into this overlooked paradigm. Our core insight is to redefine the transition probability, departing from current approaches, and to formulate the unmasking process as a multi-step decision-making problem. To further strengthen our method, we explore several useful strategies, including removing the KL constraint, applying the reduction strategy, and filtering out low-quality samples. With Mask-GRPO, we improve the base model Show-o, achieving substantial gains on standard T2I benchmarks and in preference alignment, and outperforming existing state-of-the-art approaches. The code is available at https://github.com/xingzhejun/Mask-GRPO.
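For illustration only, a KL-free GRPO-style objective applied over the unmasking trajectory could take the following form; the notation (group size $G$, number of unmasking steps $T$, rewards $r_i$, per-step transition probabilities $\pi_\theta(x^{i}_{t-1}\mid x^{i}_{t}, c)$) is an assumed sketch rather than the paper's exact formulation:

\[
\begin{aligned}
\hat{A}_i &= \frac{r_i - \operatorname{mean}\!\left(\{r_j\}_{j=1}^{G}\right)}{\operatorname{std}\!\left(\{r_j\}_{j=1}^{G}\right)},
\qquad
\rho_{i,t} = \frac{\pi_\theta\!\left(x^{i}_{t-1}\mid x^{i}_{t}, c\right)}{\pi_{\theta_{\mathrm{old}}}\!\left(x^{i}_{t-1}\mid x^{i}_{t}, c\right)},\\
\mathcal{J}(\theta) &= \mathbb{E}\!\left[\frac{1}{G}\sum_{i=1}^{G}\frac{1}{T}\sum_{t=1}^{T}
\min\!\Big(\rho_{i,t}\,\hat{A}_i,\ \operatorname{clip}\!\left(\rho_{i,t},\,1-\epsilon,\,1+\epsilon\right)\hat{A}_i\Big)\right].
\end{aligned}
\]

Here each prompt $c$ yields a group of $G$ sampled images scored by a reward model; advantages are normalized within the group, the clipped ratio is computed per unmasking step, and no KL penalty term is included, in line with the strategy of removing the KL constraint described above.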