Large Language Models Align with the Human Brain during Creative Thinking
Abstract
Large language models show varying alignment with brain activity during creative thinking tasks, with model size and post-training objectives influencing how well their representations match neural responses in creativity-related brain networks.
Creative thinking is a fundamental aspect of human cognition, and divergent thinking, the capacity to generate novel and varied ideas, is widely regarded as its core generative engine. Large language models (LLMs) have recently demonstrated impressive performance on divergent thinking tests, and prior work has shown that models with higher task performance tend to be more aligned with human brain activity. However, existing brain-LLM alignment studies have focused on passive, non-creative tasks. Here, we explore brain alignment during creative thinking using fMRI data from 170 participants performing the Alternate Uses Task (AUT). We extract representations from LLMs varying in size (270M-72B parameters) and measure their alignment with brain responses via Representational Similarity Analysis (RSA), targeting the creativity-related default mode and frontoparietal networks. We find that brain-LLM alignment scales with model size (default mode network only) and idea originality (both networks), with effects strongest early in the creative process. We further show that post-training objectives shape alignment in functionally selective ways: a creativity-optimized Llama-3.1-8B-Instruct preserves alignment with high-creativity neural responses while reducing alignment with low-creativity ones; a model fine-tuned on human behavior elevates alignment with both; and a reasoning-trained variant shows the opposite pattern, suggesting that chain-of-thought training steers representations away from creative neural geometry and toward analytical processing. These results demonstrate that post-training objectives selectively reshape LLM representations relative to the neural geometry of human creative thought.
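For readers unfamiliar with RSA, here is a minimal sketch of the technique the abstract names: build a representational dissimilarity matrix (RDM) for each system and correlate them. The paper's actual pipeline (ROI definitions, layer selection, noise ceilings) is not specified here, so the function names, array shapes, and toy data below are illustrative assumptions, not the authors' code.

```python
# Minimal RSA sketch: compare brain and LLM representational geometry.
# Shapes and data are hypothetical; real analyses use measured fMRI
# patterns per AUT item and per-item LLM embeddings.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses: np.ndarray) -> np.ndarray:
    """Condensed RDM: 1 - Pearson correlation for every stimulus pair.
    `responses` has shape (n_stimuli, n_features), e.g. voxels or hidden units."""
    return pdist(responses, metric="correlation")

def rsa_score(brain: np.ndarray, model: np.ndarray) -> float:
    """Spearman correlation between the two RDMs' off-diagonal entries,
    the standard second-order similarity used in RSA."""
    rho, _ = spearmanr(rdm(brain), rdm(model))
    return rho

# Toy usage: 40 AUT cue items, 500 voxels in a network ROI, 4096-d embeddings.
rng = np.random.default_rng(0)
brain_patterns = rng.standard_normal((40, 500))   # fMRI pattern per item
llm_embeddings = rng.standard_normal((40, 4096))  # one embedding per item
print(f"brain-LLM alignment (Spearman rho): {rsa_score(brain_patterns, llm_embeddings):.3f}")
```

Comparing RDMs rather than raw activations is what lets a 500-voxel brain pattern be matched against a 4096-dimensional embedding: both are reduced to the same item-by-item geometry.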
Community
LLM–brain alignment in creative thinking scales with model size and idea originality, peaks during early cue processing, weakens during generation, and can be selectively enhanced for highly original ideas via creativity-focused post-training.
Librarian Bot: the following similar papers were recommended by the Semantic Scholar API.
- Do Models See in Line with Human Vision? Probing the Correspondence Between LVLM Representations and EEG Signals (2026)
- CresOWLve: Benchmarking Creative Problem-Solving Over Real-World Knowledge (2026)
- When Language Models Lose Their Mind: The Consequences of Brain Misalignment (2026)
- CREATE: Testing LLMs for Associative Creativity (2026)
- From Human Cognition to Neural Activations: Probing the Computational Primitives of Spatial Reasoning in LLMs (2026)
- Language Statistics and False Belief Reasoning: Evidence from 41 Open-Weight LMs (2026)
- Left-right asymmetry in predicting brain activity from LLMs' representations emerges with their formal linguistic competence (2026)
Sure, the brain is non-linear before it gets linear. In AI we call it transformers; in the brain, it is something that comes before the physical structure: the mind :)