
🦾DeformPAM: Data-Efficient Learning for
Long-horizon Deformable Object Manipulation via
Preference-based Action Alignment

Wendi Chen1*, Han Xue1*, Fangyuan Zhou1, Yuan Fang1, Cewu Lu1
1Shanghai Jiao Tong University, * indicates equal contribution

arXiv | Project Website | Hugging Face | Google Drive | Powered by PyTorch

News

  • 2024.10: We release the pretrained models on Google Drive.
  • 2024.10: We release the data on Hugging Face.
  • 2024.10: We release the code of DeformPAM.


📄 Introduction

Key Idea

To quickly grasp the key idea of DeformPAM, you may refer to the predict() method in learning.net.primitive_diffusion::PrimitiveDiffusion.

Motivation

In long-horizon manipulation tasks, a probabilistic policy may encounter distribution shifts when imperfect policy fitting leads to unseen states. As time progresses, the deviation from the expert policy becomes more significant. Our framework employs Reward-guided Action Selection (RAS) to reassess sampled actions from the generative policy model, thereby improving overall performance.

teaser

Method

pipeline

  • In stage ①, we collect data in a real-world environment by executing assigned actions and annotating auxiliary actions for supervised learning, and then train a diffusion-based supervised primitive model on these data.
  • In stage ②, we deploy this model in the environment to collect preference data composed of annotated and predicted actions. These data are used to train a DPO-finetuned model.
  • During stage ③ (inference), we use the supervised model to predict candidate actions and employ an implicit reward model, derived from the supervised and DPO-finetuned models, for Reward-guided Action Selection (RAS). The action with the highest reward is taken as the final prediction (a minimal sketch of this selection step is shown after this list).
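
To make the RAS step concrete, here is a minimal PyTorch-style sketch of how candidate actions could be sampled from the supervised policy and re-ranked with an implicit reward. All names in this snippet (select_action, sample, action_loss) are illustrative assumptions rather than the project's actual API; the real logic lives in the predict() method of learning.net.primitive_diffusion::PrimitiveDiffusion.

# Minimal, hypothetical sketch of Reward-guided Action Selection (RAS).
# The model interfaces (sample, action_loss) are assumptions; see
# learning.net.primitive_diffusion::PrimitiveDiffusion.predict() for the real code.
import torch

@torch.no_grad()
def select_action(obs, supervised_model, dpo_model, num_candidates=16, beta=1.0):
    # 1. Sample K candidate actions from the supervised (reference) diffusion policy.
    candidates = supervised_model.sample(obs, num_samples=num_candidates)  # (K, action_dim)

    # 2. Score each candidate with an implicit reward derived from the two models.
    #    As a DPO-style surrogate, use the gap between the reference model's and
    #    the fine-tuned model's action losses: candidates the fine-tuned model
    #    "prefers" receive a higher reward.
    ref_loss = supervised_model.action_loss(obs, candidates)  # (K,)
    dpo_loss = dpo_model.action_loss(obs, candidates)         # (K,)
    implicit_reward = beta * (ref_loss - dpo_loss)            # (K,)

    # 3. Return the candidate with the highest implicit reward.
    return candidates[torch.argmax(implicit_reward)]

The design choice worth noting is that no explicit reward network is needed: the reward signal comes entirely from the disagreement between the DPO-finetuned model and the frozen supervised reference model.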

Tasks and Hardware Setup

The following figure illustrates the object states and primitives of each task. Starting from a random, complex object state, multiple action primitives are executed step by step to gradually reach the target state.

Tasks and Primitives

The following figure shows the hardware setup and tools used in our real-world experiments. Devices and tools marked with DP are not used in primitive-based methods.

Hardware Setup

⚙️ Environment Setup

🧠 Learning Environment

The learning code should work on environments that meet the following requirements:

  • A modern Linux distribution that has not reached end of life (EOL)
  • Python >= 3.8
  • PyTorch >= 1.11.0
  • CUDA >= 11.3

We recommend these combinations:

  • Ubuntu 20.04
  • Python = 3.8
  • PyTorch = 1.11.0
  • CUDA = 11.3

To set up the learning environment, you need to download and install CUDA from here in advance. Then, run the setup-env.sh script to set up all basic requirements except for GroundedSAM.

bash setup-env.sh

This script will automatically create a conda environment named DeformPAM and install the dependent packages in it. You can modify this script to change its behavior.

Finally, see GroundedSAM for installation of Grounded-DINO and Segment-Anything.

🤖 Real Environment

📷 Camera

Our project should work with any commercial 3D camera system that produces colored point clouds and RGB images. For the best performance, however, we recommend high-precision, high-resolution 3D cameras. In our experiments, we use the Photoneo MotionCam3D M+ and the Mech-Mind Mech-Eye LSR L as the main 3D cameras. If you are using a custom camera, please re-implement the get_obs() method in manipulation.experiment_real::ExperimentReal (see the sketch below).
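
Below is a minimal, hypothetical sketch of what a custom camera backend might look like. The camera methods (capture_rgb, capture_point_cloud, lookup_colors) and the returned dictionary keys are assumptions; match them to the actual get_obs() signature in manipulation.experiment_real::ExperimentReal before adapting it.

# Hypothetical sketch of a custom camera backend for get_obs(); all names are assumptions.
import numpy as np

class MyCameraExperiment:
    """Stand-in for ExperimentReal with a custom 3D camera (illustrative only)."""

    def __init__(self, camera, extrinsic_rot=None, extrinsic_trans=None):
        self.camera = camera  # wrapper around your camera vendor's SDK
        # camera-to-world extrinsics; in practice, load these from your
        # calibration files (see CALIBRATION_PATH in the Makefile)
        self.extrinsic_rot = np.eye(3) if extrinsic_rot is None else extrinsic_rot
        self.extrinsic_trans = np.zeros(3) if extrinsic_trans is None else extrinsic_trans

    def get_obs(self):
        """Return a colored point cloud and an RGB image for the policy."""
        rgb = self.camera.capture_rgb()              # (H, W, 3) uint8
        points = self.camera.capture_point_cloud()   # (N, 3) float32, camera frame
        colors = self.camera.lookup_colors(points)   # (N, 3) float32 in [0, 1]
        # transform the point cloud into the calibrated world frame
        points_world = points @ self.extrinsic_rot.T + self.extrinsic_trans
        return {"pcd_xyz": points_world, "pcd_rgb": colors, "rgb": rgb}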

Please generate the calibration files and set CALIBRATION_PATH in the Makefile. You can refer to tools/handeye_cali.py and tools/find_world_transform_from_robot_cali.py.

🦾 Robot

Our experiments are conducted using two Flexiv Rizon 4 robot arms through the Flexiv RDK. Please re-implement controller.robot_actuator::RobotActuator, controller.atom_controller::AtomController, and controller.controller::Controller, whether you are using Flexiv or custom robot arms (a hypothetical skeleton is sketched below).
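
The skeleton below illustrates one possible shape of the low-level actuator re-implementation. The method names (move_to_pose, set_gripper) are assumptions for illustration; mirror the actual abstract interfaces defined in controller.robot_actuator, controller.atom_controller, and controller.controller.

# Hypothetical skeleton for adapting the controller stack to a custom robot arm.
class MyRobotActuator:
    """Illustrative low-level actuator; align with controller.robot_actuator::RobotActuator."""

    def __init__(self, robot_ip: str):
        self.robot_ip = robot_ip
        # connect to the robot via your vendor's SDK here

    def move_to_pose(self, pose, speed: float = 0.1):
        """Send a Cartesian target pose (position + orientation) to the arm."""
        raise NotImplementedError  # call your SDK's motion API here

    def set_gripper(self, width: float):
        """Open or close the gripper to the given width in meters."""
        raise NotImplementedError  # call your SDK's gripper API here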

📦 Misc

🔧 Tools

Please refer to tools/data_management/README.md for setting up the data management tools.

📕 Usage

🔍 Inference

You can modify TASK_TYPE, SUPERVISED_MODEL_CKPT_PATH, and TEST_MODEL_CKPT_PATH in the Makefile and run the following command to conduct inference in the real-world environment. The pre-trained models can be downloaded from Google Drive.

make test_real

📚 Train Your Own Model

The training pipeline consists of two stages, both wrapped as Makefile targets. You can download the data from Hugging Face or collect your own data according to the following instructions.

Stage 1 (Supervised Learning)

Set TASK_TYPE and TASK_VERSION in the Makefile and run the following commands:

# Stage 1.1: collect supervised data
make supervised.run_real
# Stage 1.2: annotate supervised data
make scripts.run_supervised_annotation
# Stage 1.3: train supervised model
make supervised.train_real

Stage 2 (Preference Learning)

Set SUPERVISED_MODEL_CKPT_PATH in the Makefile to the path of the model trained in stage 1. Then run the following commands:

# Stage 2.1: collect on-policy data
make finetune.run_real
# Stage 2.2: annotate preference data
make scripts.run_finetune_sort_annotation
# Stage 2.3: train DPO-finetuned model
make finetune.train_real
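
For intuition on what the DPO fine-tuning in Stage 2.3 optimizes, here is a hedged sketch of the standard DPO objective on a batch of preference pairs. This is a generic illustration, not the project's exact implementation: a diffusion policy typically replaces exact log-likelihoods with a denoising-loss surrogate, and the function and parameter names below are assumptions.

# Generic DPO-style preference loss, for intuition only; the objective used by
# finetune.train_real may differ (e.g. it may operate on diffusion denoising losses).
import torch.nn.functional as F

def dpo_loss(logp_win_ft, logp_lose_ft, logp_win_ref, logp_lose_ref, beta=0.1):
    """logp_*_ft: log-prob of the preferred/dispreferred action under the model
    being fine-tuned; logp_*_ref: the same under the frozen supervised reference."""
    # implicit reward of an action = beta * (log pi_finetuned - log pi_reference)
    margin = (logp_win_ft - logp_win_ref) - (logp_lose_ft - logp_lose_ref)
    return -F.logsigmoid(beta * margin).mean()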

🙏 Acknowledgement

The motion primitives, data annotation tool, and some useful code used in our project are adapted from UniFolding.

🔗 Citation

If you find this work helpful, please consider citing:

@article{chen2024deformpam,
  title     = {DeformPAM: Data-Efficient Learning for Long-horizon Deformable Object Manipulation via Preference-based Action Alignment},
  author    = {Chen, Wendi and Xue, Han and Zhou, Fangyuan and Fang, Yuan and Lu, Cewu},
  journal   = {arXiv preprint arXiv:2410.11584},
  year      = {2024}
}
