arxiv:2603.03942

Lightweight Visual Reasoning for Socially-Aware Robots

Published on Mar 4 · Submitted by Alessio Galatolo on Mar 6

Abstract

A lightweight language-to-vision feedback module enhances Vision-Language Models for robotics applications by enabling contextual reinterpretation of visual scenes through gated MLP layers.

AI-generated summary

Robots operating in shared human environments must not only navigate, interact, and perceive their surroundings; they must also interpret and respond to dynamic, often unpredictable, human behaviours. Although recent advances have shown promise in enhancing robotic perception and instruction-following using Vision-Language Models (VLMs), these models remain limited in addressing the complexities of multimodal human-robot interaction (HRI). Motivated by this challenge, we introduce a lightweight language-to-vision feedback module that closes the loop between the LLM and the vision encoder in a VLM. The module projects image-token hidden states through a gated Multi-Layer Perceptron (MLP) back into the encoder input, prompting a second pass that reinterprets the scene under text context. We evaluate this approach on three robotics-centred tasks: navigation in a simulated environment (Habitat), sequential scene description (Mementos-Robotics), and human-intention recognition (our HRI dataset). Results show that our method improves Qwen 2.5 (7B) by 3.3% (reduced distance), +0.057 description score, and +2.93% accuracy, with less than 3% extra parameters; Gemma 3 (4B) and LLaVA OV 1.5 (4B) show mixed navigation results but gains of +0.111/+0.055 and +10.81%/+4.79% on the latter two tasks. Code is available at https://github.com/alessioGalatolo/VLM-Reasoning-for-Robotics

Community

Paper author & submitter

🤖 What if your VLM could look twice before answering?

Most VLMs encode an image once, then reason purely in text. We think that's leaving performance on the table.

We built a tiny feedback module (< 3% extra parameters) that lets the language model send a "reasoning hint" back to the vision encoder, which then re-encodes the image with that context in mind. Two passes. Much richer understanding.
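The two-pass idea can be sketched in a few lines. This is a toy NumPy illustration under our own simplifying assumptions, not the paper's implementation: the dimensions, the stand-in `encode` function, and all parameter names are hypothetical, and the gate here is a single learned scalar passed through a sigmoid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, much smaller than a real VLM).
D_ENC, D_LLM, D_HID = 32, 48, 64

# Feedback-module parameters: an MLP projecting LLM-side image-token
# states back into the vision encoder's input space, plus a scalar gate.
W1 = rng.standard_normal((D_LLM, D_HID)) * 0.02
W2 = rng.standard_normal((D_HID, D_ENC)) * 0.02
gate = 0.0  # learned scalar; sigmoid(gate) scales the feedback signal

def feedback(image_token_states):
    """Map image-token hidden states to a gated encoder-input correction."""
    h = np.tanh(image_token_states @ W1)          # MLP hidden layer
    delta = h @ W2                                # back to encoder input space
    return (1.0 / (1.0 + np.exp(-gate))) * delta  # gated contribution

def encode(x):
    """Stand-in for the (frozen) vision encoder."""
    return np.tanh(x)

patches = rng.standard_normal((16, D_ENC))      # 16 image-patch embeddings
vis_pass1 = encode(patches)                     # pass 1: plain encoding
llm_states = rng.standard_normal((16, D_LLM))   # stand-in for the LLM's
                                                # image-token hidden states
vis_pass2 = encode(patches + feedback(llm_states))  # pass 2: context-aware
```

The second call to `encode` is the "look twice": the same patches are re-encoded, shifted by a text-conditioned correction, and because the gate multiplies the whole correction, the module can learn to fall back to first-pass behaviour when the hint is unhelpful.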

The gains on tasks requiring subtle visual understanding are striking — up to +10.81% on human intention recognition, with consistent improvements on scene description too.

It works across Qwen 2.5, Gemma 3, and LLaVA OV. Trained on a general-purpose dataset.

