Reinforcement learning from human feedback (RLHF) shows promise for aligning diffusion and flow models, yet policy optimization methods such as GRPO suffer from inefficient and static sampling strategies. These methods treat all prompts and denoising steps uniformly, ignoring substantial variations in sample learning value as well as the dynamic nature of critical exploration moments. To address these issues, we conduct a detailed analysis of the internal attention dynamics during GRPO training and uncover a key insight: attention entropy can serve as a powerful dual-signal proxy. First, across different samples, the relative change in attention entropy (ΔEntropy), which reflects the divergence between the current policy and the base policy, acts as a robust indicator of sample learning value. Second, during the denoising process, the peaks of absolute attention entropy (Entropy(t)), which quantify attention dispersion, effectively identify critical timesteps where high-value exploration occurs. Building on these observations, we propose Adaptive Entropy-Guided Policy Optimization (AEGPO), a novel dual-signal, dual-level adaptive optimization strategy. At the global level, AEGPO uses ΔEntropy to dynamically allocate rollout budgets, prioritizing prompts with higher learning value. At the local level, it exploits the peaks of Entropy(t) to guide exploration selectively at critical high-dispersion timesteps rather than uniformly across all denoising steps. By focusing computation on the most informative samples and the most critical moments, AEGPO enables more efficient and effective policy optimization. Experiments on text-to-image generation tasks demonstrate that AEGPO significantly accelerates convergence and achieves superior alignment performance compared to standard GRPO variants.
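To make the two signals concrete, the following is a minimal NumPy sketch of the quantities the abstract names: the Shannon entropy of an attention map, a ΔEntropy-proportional split of a rollout budget across prompts, and the selection of peak-entropy denoising timesteps. The function names, shapes, and the top-k peak rule are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def attention_entropy(attn):
    """Shannon entropy of an attention map, averaged over queries.

    attn: array of shape (num_queries, num_keys); each row sums to 1.
    Higher values mean more dispersed attention.
    """
    p = np.clip(attn, 1e-12, 1.0)  # avoid log(0)
    return float(np.mean(-np.sum(p * np.log(p), axis=-1)))

def allocate_rollouts(delta_entropy, total_budget, min_per_prompt=1):
    """Split a rollout budget across prompts in proportion to |ΔEntropy|.

    delta_entropy: per-prompt entropy change between current and base policy
    (a stand-in for the paper's learning-value signal).
    """
    scores = np.abs(np.asarray(delta_entropy, dtype=float))
    if scores.sum() > 0:
        weights = scores / scores.sum()
    else:
        weights = np.full(len(scores), 1.0 / len(scores))
    # round to integer rollout counts, keeping a floor per prompt
    return np.maximum(min_per_prompt,
                      np.round(weights * total_budget).astype(int))

def peak_timesteps(entropy_t, k=3):
    """Indices of the k highest-entropy denoising steps, highest first.

    A simple top-k stand-in for peak detection on the Entropy(t) curve.
    """
    return np.argsort(np.asarray(entropy_t))[-k:][::-1]
```

For example, a uniform 2x4 attention map has entropy log(4), and prompts with larger |ΔEntropy| receive proportionally more of the rollout budget, which is the global-level behavior the abstract describes.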