Many recent reasoning gains in large language models can be explained as distribution sharpening: biasing generation toward high-likelihood trajectories the pretrained model already supports, rather than modifying its weights. A natural formalization is the sequence-level power distribution $π_α(y\mid x)\propto p_θ(y\mid x)^α$ ($α>1$), which concentrates mass on whole sequences rather than adjusting token-level temperature. Prior work shows that Metropolis--Hastings (MH) sampling from this distribution recovers strong reasoning performance, but at an order-of-magnitude slowdown in inference. We introduce Power-SMC, a training-free Sequential Monte Carlo scheme that targets the same distribution while staying close to standard decoding latency. Power-SMC advances a small set of particles in parallel, corrects importance weights token by token, and resamples when necessary, all within a single GPU-friendly batched decode. We prove that the tempered proposal with temperature $τ=1/α$ is the unique prefix-only proposal minimizing incremental weight variance, interpret the residual instability via prefix-conditioned Rényi entropies, and introduce an exponent-bridging schedule that improves particle stability without altering the target. On MATH500, Power-SMC matches or exceeds MH power sampling while reducing latency overhead from $16$--$28\times$ to $1.4$--$3.3\times$ relative to baseline decoding.
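The loop sketched in the abstract — parallel particles proposed at temperature $1/α$, token-level incremental weights, resampling when the effective sample size drops — can be illustrated on a toy autoregressive model. This is a minimal sketch under stated assumptions, not the paper's implementation: `power_smc`, `make_toy_lm`, and the tabular `logp_fn` are hypothetical names, and the tempered proposal $q(\cdot\mid \text{prefix})\propto p_θ(\cdot\mid \text{prefix})^α$ makes each incremental weight collapse to the prefix-conditioned normalizer $\sum_v p_θ(v\mid \text{prefix})^α$.

```python
import numpy as np

def logsumexp(x):
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

def power_smc(logp_fn, vocab_size, alpha=4.0, n_particles=8, seq_len=10,
              ess_frac=0.5, seed=0):
    """Toy SMC targeting pi_alpha(y) ∝ p(y)^alpha with the tempered
    proposal q(. | prefix) ∝ p(. | prefix)^alpha (temperature 1/alpha)."""
    rng = np.random.default_rng(seed)
    particles = [[] for _ in range(n_particles)]
    logw = np.zeros(n_particles)
    for _ in range(seq_len):
        for i in range(n_particles):
            logits = alpha * logp_fn(particles[i])   # temper base log-probs
            logZ = logsumexp(logits)                 # normalizer of p^alpha at this prefix
            probs = np.exp(logits - logZ)
            probs /= probs.sum()                     # guard against float drift
            tok = int(rng.choice(vocab_size, p=probs))
            particles[i].append(tok)
            # incremental weight p^alpha / q collapses to the prefix normalizer
            logw[i] += logZ
        # multinomial resampling when the effective sample size drops
        w = np.exp(logw - logw.max())
        w /= w.sum()
        ess = 1.0 / np.sum(w ** 2)
        if ess < ess_frac * n_particles:
            idx = rng.choice(n_particles, size=n_particles, p=w)
            particles = [list(particles[j]) for j in idx]
            logw[:] = 0.0
    return particles, logw

def make_toy_lm(vocab_size=5, seed=1):
    """Synthetic first-order model standing in for p_theta: next-token
    distribution depends only on the last token (row vocab_size = empty prefix)."""
    rng = np.random.default_rng(seed)
    table = rng.dirichlet(np.ones(vocab_size), size=vocab_size + 1)
    def logp_fn(prefix):
        last = prefix[-1] if prefix else vocab_size
        return np.log(table[last])
    return logp_fn
```

Note that because the weight increment is the normalizer $\sum_v p_θ(v\mid \text{prefix})^α = \exp\big((1-α)\,H_α(\text{prefix})\big)$, particle-weight spread is driven entirely by how the prefix-conditioned Rényi entropy $H_α$ varies across particles, which is the instability the abstract attributes to it.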