posecnn-pytorch

PyTorch implementation of the PoseCNN and PoseRBPF framework.

License

PoseCNN-PyTorch is released under the NVIDIA Source Code License (refer to the LICENSE file for details).

Citation

If you find the package useful in your research, please consider citing:

@inproceedings{xiang2018posecnn,
    author    = {Xiang, Yu and Schmidt, Tanner and Narayanan, Venkatraman and Fox, Dieter},
    title     = {PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes},
    booktitle = {Robotics: Science and Systems (RSS)},
    year      = {2018}
}

@inproceedings{deng2019pose,
    author    = {Xinke Deng and Arsalan Mousavian and Yu Xiang and Fei Xia and Timothy Bretl and Dieter Fox},
    title     = {PoseRBPF: A Rao-Blackwellized Particle Filter for 6D Object Pose Tracking},
    booktitle = {Robotics: Science and Systems (RSS)},
    year      = {2019}
}

Required environment

  • Ubuntu 16.04 or above
  • PyTorch 0.4.1 or above
  • CUDA 9.1 or above

Installation

Use Python 3 with Conda.

  1. Create a conda environment

    conda create -n posecnn
  2. Install PyTorch

  3. Install Eigen from the GitHub source code here

  4. Install python packages

    pip install -r requirement.txt
  5. Initialize the submodules

    git submodule update --init --recursive
  6. Install Sophus under the root folder

  7. Compile the new layers introduced in PoseCNN under $ROOT/lib/layers

    cd $ROOT/lib/layers
    sudo python setup.py install
  8. Compile cython components

    cd $ROOT/lib/utils
    python setup.py build_ext --inplace
  9. Compile ycb_render in $ROOT/ycb_render

    cd $ROOT/ycb_render
    sudo python setup.py develop
  10. Install ROS in conda

    conda install -c conda-forge rospkg empy

Download

  • 3D models of YCB Objects we used here (3G). Save under $ROOT/data or use a symbolic link.

  • PoseCNN checkpoints from here

  • PoseRBPF checkpoints from here

  • Our real-world images with pose annotations for 20 YCB objects, collected via robot interaction, here (53G). Check our ICRA 2020 paper for details.

Training and testing on the YCB-Video dataset

  1. Download the YCB-Video dataset from here.

  2. Create a symlink for the YCB-Video dataset

    cd $ROOT/data/YCB_Video
    ln -s $ycb_data data
  3. Training and testing on the YCB-Video dataset

    cd $ROOT
    
    # multi-gpu training, use 1 GPU or 2 GPUs since batch size is set to 2
    ./experiments/scripts/ycb_video_train.sh
    
    # testing, $GPU_ID can be 0, 1, etc.
    ./experiments/scripts/ycb_video_test.sh $GPU_ID
    

RealSense camera setup

  1. Install the RealSense ROS package

    sudo apt install ros-noetic-realsense2-camera
  2. Install the RealSense SDK from here

Running on Realsense cameras or Fetch

  1. Run PoseCNN detection for YCB objects

    # run posecnn for detection (20 YCB objects), $GPU_ID can be 0, 1, etc.
    ./experiments/scripts/ros_ycb_object_test_subset_poserbpf_realsense_ycb.sh $GPU_ID $INSTANCE_ID
  2. Run PoseRBPF for YCB objects

    # $GPU_ID can be 0, 1, etc.
    ./experiments/scripts/ros_poserbpf_ycb_object_test_subset_realsense_ycb.sh $GPU_ID $INSTANCE_ID
    

Changing the set of objects to be used for pose detection:

  • See the object ordering defined in YCBObject.__init__() in lib/datasets/ycb_object.py, under the variable name self._classes_all
  • The order in which object names appear determines the indices used in the config file to select which classes to track and estimate poses for.
    • Note that the Python list/tuple is zero-indexed
  • Config file: experiments/cfgs/ycb_object_subset_realsense.yml
    • The objects to be used are specified under TEST.CLASSES, which is a tuple of indices into the list self._classes_all above
    • Example: if your object set is 003, 005, 007, then the tuple specified in the yaml file will be: [2, 4, 6].
    • You can set TEST.SYMMETRY to a list of all zeros matching the length of TEST.CLASSES in the yaml file.
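The index mapping above can be sketched in Python. The class tuple below is an abbreviated, hypothetical stand-in for self._classes_all in lib/datasets/ycb_object.py (assumed here to start with a background class, which is consistent with the [2, 4, 6] example); check the actual file for the full 20-object ordering.

```python
# Abbreviated assumption of self._classes_all; not the full list.
classes_all = (
    "__background__",
    "002_master_chef_can",
    "003_cracker_box",
    "004_sugar_box",
    "005_tomato_soup_can",
    "006_mustard_bottle",
    "007_tuna_fish_can",
)

# Objects to detect, identified by their YCB numeric prefix.
wanted = ("003", "005", "007")

# Zero-based indices into classes_all, as expected by TEST.CLASSES.
test_classes = [i for i, name in enumerate(classes_all)
                if name.split("_")[0] in wanted]
print(test_classes)  # → [2, 4, 6]

# TEST.SYMMETRY: zeros matching the length of TEST.CLASSES.
test_symmetry = [0] * len(test_classes)
```

The resulting lists are what you would paste into experiments/cfgs/ycb_object_subset_realsense.yml under TEST.CLASSES and TEST.SYMMETRY.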

Running ROS Kitchen System with YCB Objects

  1. Start Kinect for tracking the kitchen

    roslaunch lula_dart multi_kinect.launch
  2. Start DART

    roslaunch lula_dart kitchen_dart_kinect2.launch
  3. Run DART stitcher

    ./ros/dart_stitcher_kinect2.py 
  4. Start realsense

    roslaunch realsense2_camera rs_aligned_depth.launch tf_prefix:=measured/camera
  5. Run PoseCNN detection for YCB objects

    # run posecnn for detection (20 YCB objects and cabinet handle), $GPU_ID can be 0, 1, etc.
    ./experiments/scripts/ros_ycb_object_test_subset_poserbpf_realsense_ycb.sh $GPU_ID $INSTANCE_ID
  6. Run PoseRBPF for YCB objects

    # $GPU_ID can be 0, 1, etc.
    ./experiments/scripts/ros_poserbpf_ycb_object_test_subset_realsense_ycb.sh $GPU_ID $INSTANCE_ID
  7. (optional) Run PoseCNN detection for blocks

    # run posecnn for detecting blocks, $GPU_ID can be 0, 1, etc.
    ./experiments/scripts/ros_ycb_object_test_subset_poserbpf_realsense.sh $GPU_ID $INSTANCE_ID
  8. (optional) Run PoseRBPF for blocks

    # $GPU_ID can be 0, 1, etc.
    ./experiments/scripts/ros_poserbpf_ycb_object_test_subset_realsense.sh $GPU_ID $INSTANCE_ID
