Inference-time scaling has recently emerged as a powerful paradigm for improving the reasoning capabilities of large language models. Among the various approaches, Sequential Monte Carlo (SMC) has become a particularly important framework, enabling iterative generation, evaluation, rejection, and resampling of intermediate reasoning trajectories. A central component of this process is the reward model, which evaluates partial solutions and guides the allocation of computation during inference. In practice, however, the true reward model is never available; all deployed systems rely on approximate reward models. This raises a fundamental question: why and when do approximate reward models suffice for effective inference-time scaling? In this work, we provide a theoretical answer. We identify the Bellman error of the approximate reward model as the key quantity governing the effectiveness of SMC-based inference-time scaling. For a reasoning process of length $T$, we show that if this Bellman error is bounded by $O(1/T)$, then combining the approximate reward model with SMC reduces the computational complexity of reasoning from exponential in $T$ to polynomial in $T$. This yields an exponential improvement in inference efficiency despite using only approximate rewards.
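To make the generate-evaluate-resample loop concrete, the following is a minimal sketch of SMC-style inference-time scaling with an approximate reward model. It is illustrative only, not the implementation analyzed in this work: `generate_step`, `reward_model`, and all parameter names are hypothetical placeholders, and the softmax-style weighting is one common choice rather than a prescribed one.

```python
import math
import random


def smc_inference(generate_step, reward_model, prompt, num_particles=8, num_steps=16):
    """Sketch of SMC-based inference-time scaling with an approximate reward model.

    generate_step(trajectory) -> next reasoning step, sampled from the LLM.
    reward_model(trajectory)  -> approximate scalar reward for a partial solution.
    Both callables are assumed placeholders; names and signatures are illustrative.
    """
    # Start with a population of identical partial trajectories (particles).
    particles = [[prompt] for _ in range(num_particles)]

    for _ in range(num_steps):
        # 1. Generation: extend each partial trajectory by one reasoning step.
        particles = [traj + [generate_step(traj)] for traj in particles]

        # 2. Evaluation: score each partial solution with the approximate reward model.
        scores = [reward_model(traj) for traj in particles]

        # 3. Rejection and resampling: redraw the population with probability
        #    proportional to exp(reward), so low-reward trajectories are dropped
        #    and high-reward trajectories are duplicated.
        weights = [math.exp(s) for s in scores]
        particles = random.choices(particles, weights=weights, k=num_particles)

    # Return the trajectory the approximate reward model ranks highest.
    return max(particles, key=reward_model)
```

In this picture, the number of particles and steps controls the inference-time compute budget, and the quality of `reward_model` determines how effectively that budget is allocated, which is where the Bellman-error condition above enters.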