
Overview of VLN-R1. Previous LLM/LVLM-based agents plan paths over discrete positions from a third-person perspective. In contrast, VLN-R1 explores continuous environments directly from first-person-perspective videos. We train the LVLM with Supervised Fine-Tuning (SFT) and Reinforcement Fine-Tuning (RFT).

Abstract

Vision-Language Navigation (VLN) is a core challenge in embodied AI, requiring agents to navigate real-world environments using natural language instructions. Current language-model-based navigation systems operate on discrete topological graphs, limiting path planning to predefined node connections. We propose VLN-R1, an end-to-end framework that leverages Large Vision-Language Models (LVLMs) to translate egocentric video streams directly into continuous navigation actions, adopting GRPO-based training inspired by DeepSeek-R1. To enable effective training, we first construct the VLN-Ego dataset using the Habitat 3D simulator and propose Long-Short Memory Sampling to balance historical and current observations. While large language models can supervise complete textual instructions, they lack fine-grained action-level control. Our framework therefore employs a two-stage training approach: (a) supervised fine-tuning (SFT) to align the model's action-sequence text predictions with expert demonstrations, followed by (b) reinforcement fine-tuning (RFT) enhanced with a Time-Decayed Reward (TDR) mechanism that strategically weights multi-step future actions. Experimental results show that VLN-R1 achieves strong performance on the VLN-CE benchmark, demonstrating that LVLMs can drive embodied navigation and enhance task-specific reasoning through data-efficient, reward-driven post-training.
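
To make the TDR idea concrete, below is a minimal sketch of how a time-decayed reward might score a predicted multi-step action sequence against expert actions. The exponential decay constant and the exact-match scoring are illustrative assumptions, not the paper's exact formulation.

# Illustrative sketch of a Time-Decayed Reward (TDR): earlier steps in the
# predicted action sequence are weighted more heavily than later ones.
# The decay value and exact-match scoring are assumptions for illustration.

def time_decayed_reward(pred_actions, expert_actions, decay=0.6):
    """Score a predicted multi-step action sequence against expert actions.

    Each step t contributes decay**t if the predicted action matches the
    expert action, so near-term decisions dominate the reward signal.
    """
    reward = 0.0
    for t, (pred, expert) in enumerate(zip(pred_actions, expert_actions)):
        reward += (decay ** t) * float(pred == expert)
    return reward

# Example: the first two of four future actions match the expert trajectory.
print(time_decayed_reward(["FORWARD", "LEFT", "FORWARD", "STOP"],
                          ["FORWARD", "LEFT", "RIGHT", "FORWARD"]))
# 1.0 * 1 + 0.6 * 1 + 0.36 * 0 + 0.216 * 0 = 1.6

Under such a reward, a policy that gets the immediate next actions right is preferred over one that only matches distant future steps, which suits closed-loop navigation where early errors compound.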

Model Architecture of VLN-R1


VLN-R1 processes visual inputs with Long-Short Memory Sampling. Training consists of two stages: during supervised fine-tuning (SFT), we supervise only the output text; during reinforcement fine-tuning (RFT), supervision is provided by the designed Time-Decayed Reward (TDR) mechanism.
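
As a rough illustration, the following sketch shows one way Long-Short Memory Sampling could combine sparsely sampled historical frames with densely kept recent frames. The window sizes and the uniform stride are assumed values, not the paper's exact configuration.

# Minimal sketch of Long-Short Memory Sampling: a sparse sample of older
# frames (long-term memory) is combined with the most recent frames kept
# densely (short-term memory). Window sizes and stride are assumed values.

def long_short_memory_sample(frames, short_len=8, long_len=8):
    """Select frames from a chronologically ordered egocentric video stream.

    Returns up to long_len uniformly spaced historical frames followed by
    the short_len most recent frames.
    """
    short = frames[-short_len:]                # dense, current observations
    history = frames[:-short_len]              # everything older
    if history and long_len > 0:
        stride = max(1, len(history) // long_len)
        long = history[::stride][:long_len]    # sparse, historical context
    else:
        long = []
    return long + short

# Example: a 100-frame stream is reduced to 8 sparse + 8 recent frames.
sampled = long_short_memory_sample(list(range(100)))
print(len(sampled), sampled)

This keeps the LVLM's visual context bounded while preserving both route history and the fine-grained current view.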

Dataset Construction: VLN-Ego


Data Engine: VLN-Ego. We build VLN-Ego, a dataset for LVLM-based navigation, on top of Habitat's virtual simulation engine. Each sample's textual annotation consists of three parts: an Instruction Part, a Vision Part, and an Action Part.
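
For illustration, a VLN-Ego sample might be represented as a simple record with these three parts. The field names and values below are hypothetical and do not reflect the released dataset's actual schema.

# Hypothetical sketch of a VLN-Ego training sample, assuming the three
# annotated parts described above map onto a simple record.
from dataclasses import dataclass
from typing import List

@dataclass
class VLNEgoSample:
    instruction: str          # Instruction Part: natural-language route description
    frame_paths: List[str]    # Vision Part: egocentric RGB frames from Habitat
    actions: List[str]        # Action Part: expert action sequence

sample = VLNEgoSample(
    instruction="Walk past the sofa and stop at the kitchen door.",
    frame_paths=[f"episode_0001/frame_{i:04d}.jpg" for i in range(16)],
    actions=["FORWARD", "FORWARD", "LEFT", "FORWARD", "STOP"],
)
print(sample.instruction, len(sample.frame_paths), sample.actions[:3])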

Experimental Results

[Demo videos 1–4]

Reference

@article{VLNR1,
  title={VLN-R1: Vision-Language Navigation via Reinforcement Fine-Tuning},
  author={Zhangyang Qi and Zhixiong Zhang and Yizhou Yu and Jiaqi Wang and Hengshuang Zhao},
  journal={arXiv preprint arXiv:2506.17221},
  year={2025}
}