Simulation and Reinforcement Learning for Reacher Robot
Supported platforms:
- Mac
- Linux
- Windows (untested, not recommended)
Install the Xcode command line tools (Mac only).
xcode-select --install
If you already have the tools installed, you'll get an error saying so, which you can ignore.
Install Miniconda, then create the environment and install the code:
conda create --name reacher python=3.8
conda activate reacher
pip install ray arspb
git clone https://github.com/stanfordroboticsclub/reacher-lab.git
cd reacher-lab
pip install -e .
python reacher/reacher_manual_control.py
You should see the PyBullet GUI pop up, with Reacher following the joint positions set by the sliders.
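If you're curious how the sliders drive the simulation, below is a minimal sketch of the same slider-to-joint pattern in PyBullet. It uses a sample KUKA URDF shipped with pybullet_data rather than the Reacher model, so it illustrates the mechanism only and is not the lab's script.

```python
# Minimal sketch of slider-driven joint control in PyBullet (illustration only;
# uses a sample KUKA URDF from pybullet_data, not the Reacher model).
import time
import pybullet as p
import pybullet_data

p.connect(p.GUI)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
robot = p.loadURDF("kuka_iiwa/model.urdf", useFixedBase=True)

# One debug slider per joint, initialized at zero radians.
sliders = []
for j in range(p.getNumJoints(robot)):
    name = p.getJointInfo(robot, j)[1].decode("utf-8")
    sliders.append(p.addUserDebugParameter(name, -3.14, 3.14, 0.0))

while True:
    # Read each slider and command the corresponding joint to that position.
    for j, slider in enumerate(sliders):
        target = p.readUserDebugParameter(slider)
        p.setJointMotorControl2(robot, j, p.POSITION_CONTROL, targetPosition=target)
    p.stepSimulation()
    time.sleep(1.0 / 240.0)
```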
python3 reacher/reacher_manual_control.py --ik
Assuming you have implemented all the functions inside reacher_kinematics.py according to their documentation, the above command enables Cartesian control of the robot. Move the sliders slowly, as the leg has a very limited reachable workspace. You will sometimes see the leg jerk because the solver is unable to find a suitable solution for the given XYZ coordinate.
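For orientation, below is a minimal sketch of the kind of functions reacher_kinematics.py asks for. The function names, link lengths, and the finite-difference gradient-descent IK shown here are illustrative assumptions, not the lab's required interface or solution; follow the docstrings in the actual file.

```python
# Hedged sketch: 3-DOF forward kinematics plus a numerical-gradient IK solver.
# Link lengths and function names are illustrative assumptions, not the lab spec.
import numpy as np

L1, L2 = 0.08, 0.08  # hypothetical link lengths in meters

def forward_kinematics(joint_angles):
    """Return the end-effector XYZ for (base, shoulder, elbow) angles in radians."""
    base, shoulder, elbow = joint_angles
    # Planar 2-link arm in a vertical plane, rotated about z by the base joint.
    r = L1 * np.cos(shoulder) + L2 * np.cos(shoulder + elbow)
    z = L1 * np.sin(shoulder) + L2 * np.sin(shoulder + elbow)
    return np.array([r * np.cos(base), r * np.sin(base), z])

def ik_cost(target_xyz, guess):
    """Squared distance between the target and the pose reached by `guess`."""
    return float(np.sum((forward_kinematics(guess) - target_xyz) ** 2))

def inverse_kinematics(target_xyz, guess, lr=10.0, steps=300, eps=1e-4):
    """Gradient-descent IK using finite-difference gradients of ik_cost."""
    guess = np.array(guess, dtype=float)
    for _ in range(steps):
        grad = np.zeros(3)
        for i in range(3):
            bumped = guess.copy()
            bumped[i] += eps
            grad[i] = (ik_cost(target_xyz, bumped) - ik_cost(target_xyz, guess)) / eps
        guess -= lr * grad
    return guess

# Example: solve for a reachable point and check the residual error.
target = np.array([0.10, 0.02, 0.05])
angles = inverse_kinematics(target, np.zeros(3))
print(angles, np.linalg.norm(forward_kinematics(angles) - target))
```

A target outside the arm's reachable workspace has no exact solution, which is one reason the simulated leg can jerk when the sliders are pushed too far.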
- Open VS Code and upload your lab 1 / 2 code. The Teensy application should open in the background.
- Click the open hex file button in the application window (left side) and choose the firmware.hex THAT'S IN THIS FOLDER (not from the lab 1 / 2 code).
- Click the green auto button if it's not already highlighted.
- Press the button on the Teensy to program it with the hex file.
Run the Python code:
python3 reacher/reacher_manual_control.py --run_on_robot
to do joint control of one leg, and
python3 reacher/reacher_manual_control.py --run_on_robot --ik
to do Cartesian control of the leg.