
Open Duck Playground

Installation

Install uv

curl -LsSf https://astral.sh/uv/install.sh | sh
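
If the install worked, uv will be on your PATH:

uv --version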

Training

If you want to use the imitation reward, you can generate reference motions with this repo.

Then copy polynomial_coefficients.pkl into playground/<robot>/data/.

You'll also have to set USE_IMITATION_REWARD=True in the robot's joystick.py file.
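
As a rough sketch of what that flag gates (only the flag name comes from this README; the import and class name are assumptions based on playground/common/poly_reference_motion.py, check that file for the real interface):

USE_IMITATION_REWARD = True  # enables the imitation reward term

# Assumption: the reference motion generated above is loaded through
# the helper in playground/common/poly_reference_motion.py.
if USE_IMITATION_REWARD:
    from playground.common.poly_reference_motion import PolyReferenceMotion
    reference_motion = PolyReferenceMotion("playground/<robot>/data/polynomial_coefficients.pkl")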

Run:

uv run playground/<robot>/runner.py 

TensorBoard

uv run tensorboard --logdir=<yourlogdir>

Inference

MuJoCo inference

(for now this is specific to open_duck_mini_v2)

uv run playground/open_duck_mini_v2/mujoco_infer.py -o <path_to_.onnx>
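
If you just want to sanity-check an exported policy outside MuJoCo, here is a minimal sketch using onnxruntime (the observation size and the zero observation are placeholders; playground/common/onnx_infer.py contains the project's actual inference code):

import numpy as np
import onnxruntime as ort

# Load the exported policy network.
session = ort.InferenceSession("path/to/policy.onnx")

input_name = session.get_inputs()[0].name
obs_dim = 64  # placeholder: use your environment's actual observation size
obs = np.zeros((1, obs_dim), dtype=np.float32)

# One forward pass of the policy: observation in, actions out.
actions = session.run(None, {input_name: obs})[0]
print(actions.shape)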

Documentation

Project structure:

.
├── pyproject.toml
├── README.md
├── playground
│   ├── common
│   │   ├── export_onnx.py
│   │   ├── onnx_infer.py
│   │   ├── poly_reference_motion.py
│   │   ├── randomize.py
│   │   ├── rewards.py
│   │   └── runner.py
│   ├── open_duck_mini_v2
│   │   ├── base.py
│   │   ├── constants.py
│   │   ├── data
│   │   │   └── polynomial_coefficients.pkl
│   │   ├── joystick.py
│   │   ├── mujoco_infer.py
│   │   ├── runner.py
│   │   └── xmls
│   │       ├── assets
│   │       ├── open_duck_mini_v2_no_head.xml
│   │       ├── open_duck_mini_v2.xml
│   │       ├── scene_mjx_flat_terrain.xml
│   │       ├── scene_mjx_rough_terrain.xml
│   │       └── scene.xml

Adding a new robot

Create a new directory in playground named after <your robot>. You can copy the open_duck_mini_v2 directory as a starting point.

You will need to:

  • Edit base.py: mainly renaming things to match your robot's name
  • Edit constants.py: specify the names of the important geoms, sensors, etc. (see the sketch after this list)
    • In your mjcf, you'll probably have to add some sites, name some bodies/geoms, and add the sensors. Look at how we did it for open_duck_mini_v2
  • Add your mjcf assets in xmls.
  • Edit joystick.py: choose the rewards you are interested in
    • Note: for now there are still some hard-coded values; we'll improve things along the way
  • Edit runner.py
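
To give an idea of what constants.py holds, here is a minimal sketch for a hypothetical robot; every name below is an assumption, copy the real layout from playground/open_duck_mini_v2/constants.py:

# playground/<robot>/constants.py -- illustrative sketch only
from pathlib import Path

ROOT_PATH = Path(__file__).parent
FLAT_TERRAIN_XML = ROOT_PATH / "xmls" / "scene_mjx_flat_terrain.xml"

# These strings must match names declared in your mjcf:
ROOT_BODY = "trunk"                           # assumed root body name
FEET_GEOMS = ["left_foot", "right_foot"]      # assumed foot geom names
FEET_SITES = ["left_foot_site", "right_foot_site"]
GRAVITY_SENSOR = "upvector"                   # assumed sensor name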

Notes

Inspired by https://github.com/kscalelabs/mujoco_playground

Current win

uv run playground/open_duck_mini_v2/runner.py --task flat_terrain_backlash --num_timesteps 300000000
