This repository provides the code for training VCGS, a position-constrained generative 6-DoF grasp sampler, from the paper Constrained Generative Sampling of 6-DoF Grasps.
This code has been tested with Python 3.6, PyTorch 1.4, and CUDA 10.0 on Ubuntu 18.04. To install, first install PyTorch:

```shell
pip3 install torch==1.4.0+cu100 torchvision==0.5.0+cu100 -f https://download.pytorch.org/whl/torch_stable.html
```
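After installing, a quick probe can confirm that PyTorch imports and that a CUDA device is visible. This is an optional sanity check, not a step from the original instructions:

```python
import torch

# Print the installed PyTorch version and whether a CUDA device is visible.
# On a working CUDA 10.0 setup, the second line should print True.
print(torch.__version__)
print(torch.cuda.is_available())
```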
Clone this repository:

```shell
git clone git@github.com:jsll/pytorch_6dof-graspnet.git
```
Clone pointnet++:

```shell
git clone git@github.com:erikwijmans/Pointnet2_PyTorch.git
```
Run

```shell
cd Pointnet2_PyTorch && pip3 install -r requirements.txt
```

then change back into this repository:

```shell
cd ../pytorch_6dof-graspnet
```
Run

```shell
pip3 install -r requirements.txt
```

to install the necessary Python libraries.
(Optional) Download the trained models either by running

```shell
sh checkpoints/download_models.sh
```

or manually from here. The trained models are released under CC-BY-NC-SA 2.0.
You have three options for getting the dataset:

- Run the following script:

  ```shell
  bash dataset/download_dataset.sh
  ```

- Download it by clicking this link.
- Go here and download it manually.

By default, the dataset is expected to be placed in the `dataset/` folder.
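As background, the sampler is conditioned on an object point cloud together with a region marking where grasps are allowed to make contact. The sketch below illustrates that kind of input pair; the array names, shapes, and threshold are illustrative assumptions, not the repository's actual data format:

```python
import numpy as np

# Hypothetical input pair for a position-constrained sampler:
# an object point cloud plus a binary mask over its points.
rng = np.random.default_rng(0)
point_cloud = rng.uniform(-0.05, 0.05, size=(1024, 3))  # 1024 XYZ points (meters)

# Mark points near the top of the object as the allowed contact region.
constraint_mask = (point_cloud[:, 2] > 0.02).astype(np.float32)

print(point_cloud.shape, constraint_mask.shape)  # → (1024, 3) (1024,)
```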
To generate a new CONG dataset, please use this code.
To train the position-constrained grasp sampler with the bare-minimum configuration, run:

```shell
python3 train.py
```

The evaluator is trained with the same script by selecting the evaluator architecture through the training options (e.g. `python3 train.py --arch evaluator`, as in the 6-DoF GraspNet codebase this repository is based on).
If this code is useful in your research, please consider citing:

```bibtex
@article{lundell2023constrained,
  title={Constrained generative sampling of 6-dof grasps},
  author={Lundell, Jens and Verdoja, Francesco and Le, Tran Nguyen and Mousavian, Arsalan and Fox, Dieter and Kyrki, Ville},
  journal={arXiv preprint arXiv:2302.10745},
  year={2023}
}
```
The source code is released under the MIT License, and the trained weights are released under CC-BY-NC-SA 2.0.