Reinforcement learning (RL)-based enhancement of large language models (LLMs) often reduces output diversity, undermining their utility in open-ended tasks such as creative writing. Existing methods lack explicit mechanisms for guiding diverse exploration and instead prioritize optimization efficiency and task performance over diversity. This paper proposes an RL framework built around a semi-structured long Chain-of-Thought (CoT), in which the generation process is decomposed into explicitly planned intermediate steps. We introduce a Diverse Planning Branching method that strategically injects divergence at the planning stage based on diversity variation, together with a group-aware diversity reward that encourages distinct trajectories. Experimental results on creative writing benchmarks demonstrate that our approach significantly improves output diversity without compromising generation quality, consistently outperforming existing baselines.
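The abstract does not specify how the group-aware diversity reward is computed. As a minimal sketch, assuming the reward scores each sampled trajectory by its mean embedding dissimilarity to the other members of its group, one illustrative instantiation follows; the function names, the unit-normalized embeddings, and the mixing weight `lam` are assumptions for this sketch, not the paper's stated method:

```python
import numpy as np

def group_diversity_rewards(embeddings: np.ndarray) -> np.ndarray:
    """Reward each of G trajectories by its mean cosine dissimilarity
    to the other G-1 trajectories in the same sampled group.

    `embeddings` is a (G, d) array of unit-normalized trajectory
    embeddings (an illustrative assumption; the abstract does not
    state the paper's actual distance measure).
    """
    G = embeddings.shape[0]
    sim = embeddings @ embeddings.T                      # (G, G) cosine similarities
    mean_sim = (sim.sum(axis=1) - sim.diagonal()) / (G - 1)
    return 1.0 - mean_sim                                # higher = more distinct

def combined_rewards(quality: np.ndarray, embeddings: np.ndarray,
                     lam: float = 0.5) -> np.ndarray:
    """Mix a per-trajectory quality reward with the group diversity
    term (the additive form and the weight `lam` are assumptions)."""
    return quality + lam * group_diversity_rewards(embeddings)
```

In a group-based RL setup, such a term would typically be added to the task's quality reward before computing group-relative advantages, so that trajectories earn credit both for quality and for diverging from their group peers.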