arxiv:2602.10273

Power-SMC: Low-Latency Sequence-Level Power Sampling for Training-Free LLM Reasoning

Published on Mar 23
Authors:

Abstract

Power-SMC is a training-free Sequential Monte Carlo method that achieves efficient reasoning in language models by targeting sequence-level power distributions while staying close to standard decoding speed.

AI-generated summary

Many recent reasoning gains in large language models can be explained as distribution sharpening: biasing generation toward high-likelihood trajectories already supported by the pretrained model, rather than modifying its weights. A natural formalization is the sequence-level power distribution π_α(y | x) ∝ p_θ(y | x)^α (α > 1), which concentrates mass on whole sequences instead of adjusting token-level temperature. Prior work shows that Metropolis–Hastings (MH) sampling from this distribution recovers strong reasoning performance, but at order-of-magnitude inference slowdowns. We introduce Power-SMC, a training-free Sequential Monte Carlo scheme that targets the same objective while remaining close to standard decoding latency. Power-SMC advances a small particle set in parallel, corrects importance weights token-by-token, and resamples when necessary, all within a single GPU-friendly batched decode. We prove that temperature τ = 1/α is the unique prefix-only proposal minimizing incremental weight variance, interpret residual instability via prefix-conditioned Rényi entropies, and introduce an exponent-bridging schedule that improves particle stability without altering the target. On MATH500, Power-SMC matches or exceeds MH power sampling while reducing latency overhead from 16–28× to 1.4–3.3× over baseline decoding. The code is available at https://github.com/ArminAzizi98/Power-SMC.
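
To make the sampler concrete, below is a minimal, self-contained sketch of the scheme the abstract describes: particles advance in parallel under a temperature-1/α proposal, incremental importance weights accumulate token by token, and resampling triggers when the effective sample size collapses. It uses a toy next-token model in place of an LLM, omits the exponent-bridging schedule and batched GPU decoding, and every name in it (next_token_probs, power_smc, the ESS threshold) is an illustrative assumption rather than the paper's actual implementation.

import numpy as np

# Toy next-token model p_theta(token | prefix) over a small vocabulary.
# A hypothetical stand-in for an LLM's next-token distribution, not the paper's setup.
rng = np.random.default_rng(0)
VOCAB, EOS, MAX_LEN = 8, 0, 12

def next_token_probs(prefix):
    """Deterministic pseudo-random conditional distribution for the toy model."""
    seed = abs(hash(tuple(int(t) for t in prefix))) % (2**32)
    logits = np.random.default_rng(seed).normal(size=VOCAB)
    p = np.exp(logits - logits.max())
    return p / p.sum()

def power_smc(x, alpha=4.0, num_particles=8, ess_threshold=0.5):
    """Training-free SMC targeting pi_alpha(y | x) proportional to p_theta(y | x)^alpha.

    Proposal: sample each token at temperature 1/alpha (p^alpha, renormalized).
    Incremental weight: the prefix-conditioned normalizer of p^alpha, so weights
    are corrected token by token without ever scoring the full sequence.
    """
    particles = [list(x) for _ in range(num_particles)]
    log_w = np.zeros(num_particles)
    done = np.zeros(num_particles, dtype=bool)

    for _ in range(MAX_LEN):
        for i in range(num_particles):
            if done[i]:
                continue
            p = next_token_probs(particles[i])
            q = p ** alpha                 # proposal at temperature 1/alpha
            z = q.sum()
            q /= z
            tok = rng.choice(VOCAB, p=q)
            particles[i].append(int(tok))
            # Incremental weight p(tok)^alpha / q(tok) = z (prefix-dependent normalizer).
            log_w[i] += np.log(z)
            done[i] = (tok == EOS)

        # Resample when the effective sample size falls below the threshold.
        w = np.exp(log_w - log_w.max()); w /= w.sum()
        ess = 1.0 / np.sum(w ** 2)
        if ess < ess_threshold * num_particles:
            idx = rng.choice(num_particles, size=num_particles, p=w)
            particles = [list(particles[j]) for j in idx]
            done = done[idx]
            log_w[:] = 0.0
        if done.all():
            break

    w = np.exp(log_w - log_w.max()); w /= w.sum()
    return particles[int(np.argmax(w))]    # highest-weight completion

print(power_smc(x=[3, 5]))

In this sketch the per-token weight reduces to the prefix-conditioned normalizer of p_θ^α, which is consistent with the abstract's claim that the τ = 1/α prefix-only proposal minimizes incremental weight variance.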


Get this paper in your agent:

hf papers read 2602.10273
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
