Reinforcement Learning with Verifiable Rewards (RLVR) has shown significant potential for enhancing the reasoning capabilities of Large Language Models (LLMs). However, existing RLVR methods are often constrained by coarse-grained rewards, reward noise, and inefficient exploration, which lead to unstable training and entropy collapse. To address these challenges, we propose Intrinsic Confidence-Driven Group Relative Preference Optimization (ICPO). The intuition is that the probabilities an LLM assigns to different responses inherently reflect its self-assessment of its own reasoning. Inspired by preference modeling, ICPO computes a preference advantage score for each response by comparing the relative generation probabilities of multiple responses to the same input prompt, and integrates this score with verifiable rewards to guide exploration. We find that the preference advantage score not only mitigates the effects of coarse-grained rewards and reward noise, but also curbs overconfident errors, strengthens the relative advantage of undervalued high-quality responses, and prevents the model from overfitting to specific strategies. Comprehensive experiments on four general-domain benchmarks and three mathematical benchmarks show that ICPO consistently improves reasoning over GRPO.
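To make the mechanism concrete, the following is a minimal sketch (Python/NumPy) of how a preference advantage derived from relative generation probabilities might be combined with verifiable rewards in a GRPO-style group update. The function name, the softmax-based preference score, the mixing weight alpha, and the additive blend are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def icpo_style_advantages(seq_logprobs, verifiable_rewards, alpha=0.5, eps=1e-8):
    """Hypothetical sketch: blend a verifiable-reward signal with a preference
    score derived from the model's relative generation probabilities.

    seq_logprobs:       per-response sequence log-probability under the policy,
                        shape (G,) for a group of G responses to one prompt.
    verifiable_rewards: per-response verifiable reward (e.g. 1.0 if the final
                        answer is verified correct, 0.0 otherwise), shape (G,).
    alpha:              mixing weight for the preference term (assumed).
    """
    logp = np.asarray(seq_logprobs, dtype=np.float64)
    r = np.asarray(verifiable_rewards, dtype=np.float64)

    # Preference score: softmax over the group's sequence log-probabilities,
    # i.e. how strongly the model prefers each response relative to its peers.
    pref = np.exp(logp - logp.max())
    pref = pref / pref.sum()

    # Illustrative blend (an assumption, not the paper's rule): the correction
    # term (r - pref) penalizes responses the model is confident in but the
    # verifier rejects, and boosts correct responses the model under-ranks.
    combined = r + alpha * (r - pref)

    # GRPO-style group-relative standardization of the combined score.
    return (combined - combined.mean()) / (combined.std() + eps)


# Toy usage: four sampled responses to one prompt.
logps = [-12.0, -15.5, -14.0, -20.0]   # model's sequence log-probabilities
rewards = [0.0, 1.0, 1.0, 0.0]         # verifier outcome per response
print(icpo_style_advantages(logps, rewards))
```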