arxiv:2605.10762

GridProbe: Posterior-Probing for Adaptive Test-Time Compute in Long-Video VLMs

Published on May 11 · Submitted by Mohamed Eltahir on May 12

Abstract

GridProbe enables efficient long-video understanding by adaptively selecting relevant frames using a frozen VLM's reasoning, achieving sub-quadratic attention cost with minimal accuracy loss through shape-adaptive selection and interpretable importance maps.

AI-generated summary

Long-video understanding in VLMs is bottlenecked by a single monolithic forward pass over thousands of frames at quadratic attention cost. A common mitigation is to select a small subset of informative frames before the forward pass, typically done for training-free selectors via auxiliary encoder-space similarities. Such signals are capped by contrastive pretraining and usually fail on reasoning-heavy queries (negation, cross-frame counting, holistic summarization). We propose GridProbe, an efficient training-free posterior-probing inference paradigm that scores evidence in answer space using a frozen VLM's own reasoning and then selects question-relevant frames adaptively, resulting in sub-quadratic attention cost with little to no accuracy loss. We arrange frames on a K×K grid and run lightweight row R and column C probes, where each probe reads its peak posterior as a query-conditioned confidence. The outer product of R and C yields an interpretable importance map whose skewness and kurtosis drive Shape-Adaptive Selection, a closed-form rule that reliably replaces the fixed frame budget M with a per-question M_eff. We show empirically that M_eff tracks intrinsic question difficulty without ever seeing the answer, a sign of test-time adaptive compute. On Video-MME-v2, GridProbe matches the monolithic baseline within 1.6 pp Avg Acc at a 3.36× TFLOPs reduction, while on LongVideoBench it Pareto-dominates the baseline (+0.9 pp at 0.35× compute). Because the selector and QA models can be decoupled, pairing a small 2B selector with a stronger 4B or 8B QA model is strictly Pareto-dominant over the 2B monolithic baseline (up to +4.0 pp at 0.52× compute, on average), with no retraining. Finally, the interpretability of the importance maps opens future avenues for behavioral diagnostics, grounding, and frame-selection distillation.
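The core pipeline described in the abstract (outer product of probe confidences → importance map → shape statistics → per-question budget → frame selection) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, budget bounds, and the exact mapping from skewness/kurtosis to M_eff are assumptions for illustration, since the paper's closed-form rule is not reproduced here.

```python
import numpy as np

def importance_map(row_conf, col_conf):
    """Outer product of row and column probe confidences -> K x K map,
    normalized to a distribution over grid cells (one cell per frame)."""
    m = np.outer(np.asarray(row_conf, float), np.asarray(col_conf, float))
    return m / m.sum()

def adaptive_budget(imp_map, m_min=4, m_max=64):
    """Hypothetical shape-adaptive rule: a peaky map (high skew/kurtosis)
    suggests evidence is concentrated, so few frames suffice; a flat map
    suggests a harder, more holistic question needing more frames."""
    p = imp_map.ravel()
    mu, sd = p.mean(), p.std()
    skew = ((p - mu) ** 3).mean() / (sd ** 3 + 1e-12)
    kurt = ((p - mu) ** 4).mean() / (sd ** 4 + 1e-12)
    # Squash "peakiness" into (0, 1); constants here are illustrative only.
    peakiness = 1.0 / (1.0 + np.exp(-(skew + kurt - 3.0) / 10.0))
    return int(round(m_min + (1.0 - peakiness) * (m_max - m_min)))

def select_frames(imp_map, m_eff):
    """Pick the m_eff grid cells (frames) with the highest importance."""
    return np.argsort(imp_map.ravel())[::-1][:m_eff]
```

Only the selected frames are then passed to the (possibly larger) QA model, which is what yields the sub-quadratic attention cost and lets a small selector pair with a stronger answerer.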

Community

GridProbe: Posterior-Probing for Adaptive Test-Time Compute in Long-Video VLMs

🔵TL;DR: a new sub-quadratic, training-free inference method for video VLMs.
➡️3.36× less compute, no accuracy loss, fully interpretable at the cell level, and adaptive frame selection driven by the VLM's own reasoning rather than CLIP-like similarity tricks!

🔥Spoiler alert: Surprisingly, question difficulty alone tells you how many frames a long-video VLM needs to answer it! We turn this into a closed-form rule!!


Get this paper in your agent:

hf papers read 2605.10762
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
