Large language models (LLMs) are increasingly deployed as multi-step decision-making agents, where effective reward design is essential for guiding learning. Although recent work explores various forms of reward shaping and step-level credit assignment, a key signal remains largely overlooked: the intrinsic uncertainty of LLMs. Uncertainty reflects model confidence, reveals where exploration is needed, and offers valuable learning cues even in failed trajectories. We introduce SELAUR (Self-Evolving LLM Agent via Uncertainty-aware Rewards), a reinforcement learning framework that incorporates uncertainty directly into reward design. SELAUR integrates entropy-, least-confidence-, and margin-based metrics into a unified token-level uncertainty estimate, providing dense, confidence-aligned supervision, and employs a failure-aware reward-reshaping mechanism that injects these uncertainty signals into step- and trajectory-level rewards, improving exploration efficiency and learning stability. Experiments on two benchmarks, ALFWorld and WebShop, show that our method consistently improves success rates over strong baselines. Ablation studies further demonstrate how uncertainty signals enhance exploration and robustness.
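To make the combined uncertainty estimate concrete, the sketch below shows the standard definitions of the three metrics the abstract names (predictive entropy, least confidence, and margin) computed per token from raw logits. The function name `token_uncertainty`, the normalization choices, and the equal-weight average are illustrative assumptions; the abstract does not specify SELAUR's exact combination.

```python
import torch
import torch.nn.functional as F

def token_uncertainty(logits: torch.Tensor) -> torch.Tensor:
    """Combine entropy-, least-confidence-, and margin-based uncertainty
    into a single per-token score (a sketch, not the paper's exact form).

    logits: (seq_len, vocab_size) raw model outputs.
    Returns: (seq_len,) scores in [0, 1]; higher means less confident.
    """
    probs = F.softmax(logits, dim=-1)
    log_probs = F.log_softmax(logits, dim=-1)

    # Predictive entropy, normalized by log|V| so it lies in [0, 1].
    vocab_size = torch.tensor(probs.size(-1), dtype=probs.dtype)
    entropy = -(probs * log_probs).sum(dim=-1) / torch.log(vocab_size)

    # Least confidence: 1 - p(top-1 token).
    top2 = probs.topk(2, dim=-1).values
    least_conf = 1.0 - top2[..., 0]

    # Margin: 1 - (p(top-1) - p(top-2)); a small margin between the two
    # most likely tokens indicates high uncertainty.
    margin = 1.0 - (top2[..., 0] - top2[..., 1])

    # Equal-weight average (an assumption; the actual weighting may differ).
    return (entropy + least_conf + margin) / 3.0
```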
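The failure-aware reshaping can likewise be sketched under stated assumptions. The version below supposes that on successful trajectories confident (low-uncertainty) steps receive a bonus, while on failed trajectories uncertain steps are rewarded as exploration targets instead of being uniformly penalized; `reshape_rewards`, the mixing coefficient `alpha`, and the mean-based trajectory reward are all hypothetical, chosen only to illustrate how uncertainty signals could enter step- and trajectory-level rewards.

```python
def reshape_rewards(step_rewards, step_uncertainty, success, alpha=0.5):
    """Hypothetical failure-aware reshaping (not SELAUR's exact rule).

    step_rewards:     per-step environment rewards.
    step_uncertainty: per-step uncertainty scores in [0, 1].
    success:          whether the trajectory solved the task.
    alpha:            assumed mixing coefficient (illustrative).
    Returns shaped step rewards and a trajectory-level reward.
    """
    shaped = []
    for r, u in zip(step_rewards, step_uncertainty):
        if success:
            # Reward confident steps on successful trajectories.
            shaped.append(r + alpha * (1.0 - u))
        else:
            # On failure, keep a signal on high-uncertainty steps so
            # failed trajectories still guide exploration.
            shaped.append(r + alpha * u)
    # Trajectory-level reward as the mean shaped step reward (assumption).
    return shaped, sum(shaped) / max(len(shaped), 1)
```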