DexPoint: Generalizable Point Cloud Reinforcement Learning for Sim-to-Real Dexterous Manipulation

Yuzhe Qin*, Binghao Huang*, Zhao-Heng Yin, Hao Su, Xiaolong Wang, CoRL 2022.

DexPoint is a novel system and algorithm for reinforcement learning from point clouds. This repo contains the simulated environment and training code for DexPoint.

Teaser

Bibtex

@inproceedings{dexpoint,
  title     = {DexPoint: Generalizable Point Cloud Reinforcement Learning for Sim-to-Real Dexterous Manipulation},
  author    = {Qin, Yuzhe and Huang, Binghao and Yin, Zhao-Heng and Su, Hao and Wang, Xiaolong},
  booktitle = {Conference on Robot Learning (CoRL)},
  year      = {2022},
}

Installation

git clone git@github.com:yzqin/dexpoint-release.git
cd dexpoint-release
conda create --name dexpoint python=3.8
conda activate dexpoint
pip install -e .

Download the scene data file from the Google Drive link below and place day.ktx at assets/misc/ktx/day.ktx.

pip install gdown
gdown https://drive.google.com/uc?id=1Xe3jgcIUZm_8yaFUsHnO7WJWr8cV41fE
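After downloading, the texture needs to be moved into the expected location. A minimal sketch, assuming gdown placed day.ktx in the repository root:

```shell
# Create the target directory and move the downloaded scene texture there.
# Assumes day.ktx was downloaded into the repository root by gdown above.
mkdir -p assets/misc/ktx
[ -f day.ktx ] && mv day.ktx assets/misc/ktx/day.ktx
ls assets/misc/ktx
```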

File Structure

  • dexpoint: main package containing the environment, utilities, and other components needed for RL training.
  • assets: robot and object models, and other static files
  • example: entry files to learn how to use the DexPoint environment
  • docker: Dockerfile for building a container for headless training on a server

Quick Start

Use DexPoint environment and extend it for your project

Run the example file below and explore its comments to familiarize yourself with the basic architecture of the DexPoint environment. Check the printed messages to understand the observation space, action space, camera configuration, and simulation speed of these environments.

The environment used for training in the DexPoint paper can be found in example_dexpoint_grasping.py.
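Environments like this typically follow a gym-style reset/step loop, where each observation pairs a point cloud from a simulated depth camera with proprioceptive robot state. The stand-in class below sketches that interface shape; the class name, observation keys, and dimensions are illustrative assumptions, not the actual DexPoint API.

```python
import numpy as np


class DummyPointCloudEnv:
    """Illustrative stand-in for a point-cloud RL environment (not the real API)."""

    def __init__(self, num_points=512, action_dim=22):
        self.num_points = num_points  # points kept from the depth camera cloud
        self.action_dim = action_dim  # e.g. arm + multi-fingered hand joints

    def reset(self):
        return self._get_observation()

    def step(self, action):
        assert action.shape == (self.action_dim,)
        obs = self._get_observation()
        reward, done, info = 0.0, False, {}
        return obs, reward, done, info

    def _get_observation(self):
        # Observation mixes a (N, 3) xyz point cloud with robot proprioception.
        return {
            "point_cloud": np.random.randn(self.num_points, 3),
            "robot_state": np.zeros(self.action_dim),
        }


env = DummyPointCloudEnv()
obs = env.reset()
obs, reward, done, info = env.step(np.zeros(env.action_dim))
print(obs["point_cloud"].shape)  # (512, 3)
```

A real DexPoint environment adds task rewards, camera configuration, and imagined hand points on top of this basic loop; see the example file for the actual interface.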

Training

Download the ShapeNet models from Google Drive and place them inside the following directory: dexpoint-release/assets/shapenet/.

The DexPoint repo uses the same training code and environment interface as DexArt for RL training. Please refer to the DexArt training code to train DexPoint with PPO.
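At the core of any PPO training loop is advantage estimation over collected rollouts. The sketch below shows generalized advantage estimation (GAE) as commonly implemented in PPO libraries; it is a generic illustration, not code taken from the DexArt/DexPoint training scripts.

```python
import numpy as np


def compute_gae(rewards, values, last_value, gamma=0.99, lam=0.95):
    """Backward-recursive GAE over one rollout (no episode boundaries)."""
    advantages = np.zeros_like(rewards)
    gae = 0.0
    for t in reversed(range(len(rewards))):
        # Bootstrap with the value estimate after the final step.
        next_value = last_value if t == len(rewards) - 1 else values[t + 1]
        # TD residual: one-step advantage estimate.
        delta = rewards[t] + gamma * next_value - values[t]
        # Exponentially weighted sum of future TD residuals.
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    return advantages


rewards = np.array([1.0, 0.0, 1.0])
values = np.array([0.5, 0.5, 0.5])
adv = compute_gae(rewards, values, last_value=0.0)
print(adv)
```

PPO then normalizes these advantages and uses them in its clipped surrogate objective; the lambda parameter trades off bias against variance in the estimate.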

Acknowledgements

We would like to thank the authors of the following projects for making this work possible:

Example extensions of the DexPoint environment framework in other projects:

DexArt: Benchmarking Generalizable Dexterous Manipulation with Articulated Objects (CVPR 2023): extends DexPoint to articulated object manipulation.

From One Hand to Multiple Hands: Imitation Learning for Dexterous Manipulation from Single-Camera Teleoperation (RA-L 2022): uses teleoperation for data collection in the DexPoint environment.
