Test-time scaling has emerged as a critical avenue for enhancing the reasoning capabilities of Large Language Models (LLMs). Although the straightforward ``best-of-$N$'' (BoN) strategy has already demonstrated significant performance improvements, it lacks principled guidance on the choice of $N$, budget allocation, and multi-stage decision-making, leaving substantial room for optimization. While many works have explored such optimization, rigorous theoretical guarantees remain limited. In this work, we propose new methodologies to predict and improve scaling properties via tail-guided search. By estimating the tail distribution of rewards, our method predicts the scaling law of LLMs without the need for exhaustive evaluations. Leveraging this prediction tool, we introduce Scaling-Law Guided (SLG) Search, a new test-time algorithm that dynamically allocates compute to identify and exploit intermediate states with the highest predicted potential. We theoretically prove that SLG achieves vanishing regret compared to perfect-information oracles, and attains expected rewards that would otherwise require a polynomially larger compute budget under BoN. Empirically, we validate our framework across different LLMs and reward models, confirming that tail-guided allocation consistently achieves higher reward yields than BoN under identical compute budgets. Our code is available at https://github.com/PotatoJnny/Scaling-Law-Guided-search.
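The core prediction idea — estimating how the best-of-$N$ reward grows with $N$ from a small sample of rewards — can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes i.i.d. scalar rewards and uses a simple plug-in estimator (weighting the $i$-th order statistic of $m$ observed rewards by $((i{+}1)/m)^N - (i/m)^N$, the probability it is the maximum of $N$ draws from the empirical distribution); the function name `predict_best_of_n` is illustrative.

```python
import random

def predict_best_of_n(rewards, n):
    """Plug-in estimate of E[max of n i.i.d. reward draws] from m observed
    rewards: the i-th sorted reward is the max of n draws from the empirical
    distribution with probability ((i+1)/m)**n - (i/m)**n."""
    m = len(rewards)
    xs = sorted(rewards)
    return sum(x * (((i + 1) / m) ** n - (i / m) ** n) for i, x in enumerate(xs))

# Toy demo with Uniform(0,1) rewards, where the true value is n / (n + 1):
# the predicted curve lets one compare BoN budgets without running them all.
random.seed(0)
sample = [random.random() for _ in range(2000)]
for n in (1, 4, 16):
    print(f"predicted best-of-{n}: {predict_best_of_n(sample, n):.3f}")
```

In the paper's setting the empirical CDF would be replaced by a fitted tail model, which extrapolates more reliably to large $N$ than the raw sample maximum.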