arxiv:2603.09906

Thinking to Recall: How Reasoning Unlocks Parametric Knowledge in LLMs

Published on Mar 10 · Submitted by Zorik on Mar 11
#3 Paper of the day

Abstract


While reasoning in LLMs plays a natural role in math, code generation, and multi-hop factual questions, its effect on simple, single-hop factual questions remains unclear. Such questions do not require step-by-step logical decomposition, making the utility of reasoning highly counterintuitive. Nevertheless, we find that enabling reasoning substantially expands the capability boundary of the model's parametric knowledge recall, unlocking correct answers that are otherwise effectively unreachable. Why does reasoning aid parametric knowledge recall when there are no complex reasoning steps to be done? To answer this, we design a series of hypothesis-driven controlled experiments, and identify two key driving mechanisms: (1) a computational buffer effect, where the model uses the generated reasoning tokens to perform latent computation independent of their semantic content; and (2) factual priming, where generating topically related facts acts as a semantic bridge that facilitates correct answer retrieval. Importantly, this latter generative self-retrieval mechanism carries inherent risks: we demonstrate that hallucinating intermediate facts during reasoning increases the likelihood of hallucinations in the final answer. Finally, we show that our insights can be harnessed to directly improve model accuracy by prioritizing reasoning trajectories that contain hallucination-free factual statements.
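The final claim, improving accuracy by prioritizing hallucination-free reasoning trajectories, can be illustrated with a minimal sketch. This is not the paper's implementation: the `Trajectory` structure and the `is_supported` verifier are hypothetical stand-ins (a real system might use retrieval or self-consistency checks), and the sketch simply ranks sampled traces by the fraction of their intermediate facts that pass verification.

```python
# Hedged sketch: among several sampled reasoning traces, prefer the one
# whose intermediate factual statements are verified, then return its answer.
from dataclasses import dataclass


@dataclass
class Trajectory:
    facts: list[str]   # intermediate factual statements extracted from the trace
    answer: str        # final answer produced by the trace


def is_supported(fact: str, knowledge: set[str]) -> bool:
    # Stand-in for a real fact verifier (e.g., retrieval or a judge model).
    return fact in knowledge


def pick_best(trajectories: list[Trajectory], knowledge: set[str]) -> str:
    # Score = fraction of intermediate facts the verifier supports;
    # hallucination-free traces score 1.0 and are preferred.
    def score(t: Trajectory) -> float:
        return sum(is_supported(f, knowledge) for f in t.facts) / len(t.facts) if t.facts else 0.0
    return max(trajectories, key=score).answer


knowledge = {"Canberra is the capital of Australia"}
trajs = [
    Trajectory(["Sydney is the capital of Australia"], "Sydney"),
    Trajectory(["Canberra is the capital of Australia"], "Canberra"),
]
print(pick_best(trajs, knowledge))  # Canberra
```

The key design choice mirrors the paper's finding: because hallucinated intermediate facts raise the chance of a hallucinated final answer, filtering on the intermediate statements is a cheap proxy for final-answer correctness.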

Community

Paper author · Paper submitter

We study the mechanisms through which reasoning expands LLMs’ parametric recall boundary on simple factual questions that do not require step-by-step solutions.

