Hierarchical Spatial Proximity Reasoning for Vision-and-Language Navigation

XieZilongAI/HSPR


Most Vision-and-Language Navigation (VLN) algorithms tend to make decision errors, primarily due to a lack of visual common sense and insufficient reasoning capabilities. To address this issue, this paper proposes a Hierarchical Spatial Proximity Reasoning (HSPR) model. Firstly, we design a Scene Understanding Auxiliary Task (SUAT) to assist the agent in constructing a knowledge base of hierarchical spatial proximity for reasoning navigation. Specifically, this task utilizes panoramic views and object features to identify regions in the navigation environment and uncover the adjacency relationships between regions, objects, and region-object pairs. Secondly, we dynamically construct a semantic topological map through agent-environment interactions and propose a Multi-step Reasoning Navigation Algorithm (MRNA) based on the map. This algorithm continuously plans various feasible paths from one region to another, utilizing the constructed proximity knowledge base, enabling more efficient exploration. Additionally, we introduce a Proximity Adaptive Attention Module (PAAM) and Residual Fusion Method (RFM) to enable the model to obtain more accurate navigation decision confidence. Finally, we conduct experiments on publicly available datasets including REVERIE, SOON, R2R, and R4R to validate the effectiveness of the proposed approach.
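The region-to-region planning idea behind MRNA can be illustrated with a minimal sketch. Everything below is hypothetical: the region names and the hard-coded adjacency dictionary stand in for the proximity knowledge base that HSPR learns via SUAT, and the breadth-first path enumeration only illustrates "planning various feasible paths from one region to another", not the paper's actual algorithm.

```python
from collections import deque

# Hypothetical proximity knowledge base: region -> adjacent regions.
# In HSPR, this adjacency is learned from panoramic views and object
# features via the Scene Understanding Auxiliary Task (SUAT).
PROXIMITY = {
    "hallway": ["living_room", "bathroom"],
    "living_room": ["hallway", "kitchen"],
    "kitchen": ["living_room", "dining_room"],
    "bathroom": ["hallway"],
    "dining_room": ["kitchen"],
}

def plan_paths(start, goal, max_steps=4):
    """Enumerate feasible region-to-region paths of at most max_steps hops."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            paths.append(path)
            continue
        if len(path) > max_steps:
            continue
        for nxt in PROXIMITY.get(path[-1], []):
            if nxt not in path:  # avoid revisiting a region
                queue.append(path + [nxt])
    return paths

print(plan_paths("hallway", "dining_room"))
# → [['hallway', 'living_room', 'kitchen', 'dining_room']]
```

In the full model, the candidate paths would be scored against the instruction and the dynamically built semantic topological map rather than enumerated exhaustively.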

(Framework overview figure)

Requirements

  1. Install the Matterport3D simulator: follow the instructions here. We use the latest version instead of v0.1.
export PYTHONPATH=Matterport3DSimulator/build:$PYTHONPATH
  2. Install python>=3.8 and pytorch==1.7.1.
conda create --name hspr python=3.8.5
conda activate hspr
# conda
conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=11.0 -c pytorch
# pip
pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
  3. Install the remaining requirements.
pip install -r requirements.txt
  4. Download data from Dropbox, including the processed annotations, features, and pretrained models for the REVERIE, SOON, R2R, and R4R datasets; download the navigation-environment labels from Baidu Netdisk; and put everything into the 'datasets' directory.

  5. Download the pretrained LXMERT model.

mkdir -p datasets/pretrained 
wget https://nlp.cs.unc.edu/data/model_LXRT.pth -P datasets/pretrained

Pretraining

Combine behavior cloning and auxiliary proxy tasks in pretraining.

cd pretrain_auxiliary_src
bash run_reverie.sh # (run_soon.sh, run_r2r.sh, run_r4r.sh)

Fine-tuning & Evaluation

Use a pseudo-interactive demonstrator to fine-tune the model.

cd reasoning_nav_src
bash scripts/run_reverie.sh # (run_soon.sh, run_r2r.sh)
