Abstract
MAGNet, a multimodal transformer-based model, enables embodied agents to navigate audio-visual environments by jointly encoding spatial and semantic goal representations while incorporating historical context and self-motion cues for memory-augmented goal reasoning.
Audio-visual navigation enables embodied agents to navigate toward sound-emitting targets by leveraging both auditory and visual cues. However, most existing approaches rely on precomputed room impulse responses (RIRs) for binaural audio rendering, restricting agents to discrete grid positions and leading to spatially discontinuous observations. To establish a more realistic setting, we introduce Semantic Audio-Visual Navigation in Continuous Environments (SAVN-CE), where agents can move freely in 3D spaces and perceive temporally and spatially coherent audio-visual streams. In this setting, targets may intermittently become silent or stop emitting sound entirely, causing agents to lose goal information. To tackle this challenge, we propose MAGNet, a multimodal transformer-based model that jointly encodes spatial and semantic goal representations and integrates historical context with self-motion cues to enable memory-augmented goal reasoning. Comprehensive experiments demonstrate that MAGNet significantly outperforms state-of-the-art methods, achieving up to a 12.1% absolute improvement in success rate. These results also highlight its robustness to short-duration sounds and long-distance navigation scenarios. The code is available at https://github.com/yichenzeng24/SAVN-CE.
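The abstract only names the components; the linked repository holds the actual implementation. As a rough, hypothetical sketch of what "memory-augmented goal reasoning" over fused visual, audio, and self-motion features might look like, the PyTorch snippet below fuses per-step features into a token, appends it to a rolling memory, and runs a transformer encoder over the history. All names (`GoalReasoningSketch`, `fuse`, `goal_head`) and design choices (a 64-token memory, one fused token per step) are assumptions for illustration, not MAGNet's published architecture.

```python
import torch
import torch.nn as nn

class GoalReasoningSketch(nn.Module):
    """Hypothetical sketch of memory-augmented goal reasoning.

    Per-step visual, binaural-audio, and self-motion (pose) features are
    fused into a single token, appended to a bounded memory of past tokens,
    and processed by a transformer encoder, so a goal estimate can persist
    even after the sound source falls silent.
    """

    def __init__(self, dim: int = 256, num_layers: int = 2, max_memory: int = 64):
        super().__init__()
        # Fuse visual, audio, and pose features into one token per step.
        self.fuse = nn.Linear(3 * dim, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Project the latest hidden state to a joint spatial/semantic goal embedding.
        self.goal_head = nn.Linear(dim, dim)
        self.max_memory = max_memory

    def forward(self, vis, aud, pose, memory=None):
        # vis / aud / pose: (B, dim) features from upstream encoders (assumed).
        token = self.fuse(torch.cat([vis, aud, pose], dim=-1)).unsqueeze(1)  # (B, 1, dim)
        memory = token if memory is None else torch.cat([memory, token], dim=1)
        memory = memory[:, -self.max_memory:]        # keep a bounded history
        hidden = self.encoder(memory)                # attend over the whole memory
        goal = self.goal_head(hidden[:, -1])         # goal estimate at the current step
        return goal, memory
```

In this sketch, the agent would call the module once per step, passing back the `memory` returned at the previous step, so the goal estimate is conditioned on historical context and self-motion cues rather than on the current (possibly silent) audio frame alone.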
Community
This paper has been accepted to CVPR 2026
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- JAEGER: Joint 3D Audio-Visual Grounding and Reasoning in Simulated Physical Environments (2026)
- SignNav: Leveraging Signage for Semantic Visual Navigation in Large-Scale Indoor Environments (2026)
- HiMemVLN: Enhancing Reliability of Open-Source Zero-Shot Vision-and-Language Navigation with Hierarchical Memory System (2026)
- SPAN-Nav: Generalized Spatial Awareness for Versatile Vision-Language Navigation (2026)
- P$^{3}$Nav: End-to-End Perception, Prediction and Planning for Vision-and-Language Navigation (2026)
- Enhancing Vision-Language Navigation with Multimodal Event Knowledge from Real-World Indoor Tour Videos (2026)
- From Instruction to Event: Sound-Triggered Mobile Manipulation (2026)
If you want recommendations for any Paper on Hugging Face, check out this Space
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend