# unitreerobotics / unitree_rl_gym
- Thursday, 19 December 2024, 00:00:03
This is a simple example of using Unitree robots for reinforcement learning, including the Unitree Go2, H1, H1_2, and G1.
| Isaac Gym | Mujoco | Physical |
|---|---|---|
Create a new Python virtual environment with Python 3.8.
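This step can also be done from Python with the standard-library `venv` module. A minimal sketch; the environment name `unitree-rl` is an arbitrary choice here, and a conda environment created with `python=3.8` works equally well:

```python
# Minimal sketch: create a virtual environment with the stdlib venv module.
# The env name "unitree-rl" is arbitrary; `python3 -m venv unitree-rl` or a
# conda env with python=3.8 achieves the same thing.
import os
import venv

def make_env(path: str = "unitree-rl") -> str:
    """Create a venv at `path` and return the path to its python binary."""
    venv.create(path, with_pip=False)  # use with_pip=True in practice; False is faster
    return os.path.join(path, "bin", "python")
```

Activate the environment afterwards with `source unitree-rl/bin/activate`.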
Install PyTorch 2.3.1 with CUDA 12.1:

```shell
pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cu121
```

Install Isaac Gym:

```shell
cd isaacgym/python && pip install -e .
```

Verify the installation by running an example:

```shell
cd examples && python 1080_balls_of_solitude.py
```

Install rsl_rl (the PPO implementation):
```shell
cd rsl_rl && git checkout v1.0.2 && pip install -e .
```

Install unitree_rl_gym:
```shell
cd unitree_rl_gym && pip install -e .
```

Install unitree_sdk2py (optional, for deployment on the real robot):
```shell
cd unitree_sdk2_python && pip install -e .
```

Train:
```shell
python legged_gym/scripts/train.py --task=go2
```

- `--sim_device=cpu`, `--rl_device=cpu`: run on CPU (sim on CPU and RL on GPU is also possible).
- `--headless`: run without rendering. During rendered training, press `v` to stop the rendering; you can enable it later to check the progress.
- The trained policy is saved to `logs/<experiment_name>/<date_time>_<run_name>/model_<iteration>.pt`, where `<experiment_name>` and `<run_name>` are defined in the train config.

Play:

```shell
python legged_gym/scripts/play.py --task=go2
```
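As a sketch of navigating that checkpoint layout, here is a small helper (hypothetical, not part of the repo) that returns the newest `model_<iteration>.pt` in a run directory:

```python
# Sketch (not part of unitree_rl_gym): find the latest checkpoint in a run
# directory laid out as logs/<experiment_name>/<date_time>_<run_name>/model_<iteration>.pt.
import os
import re
from typing import Optional

_CKPT = re.compile(r"^model_(\d+)\.pt$")

def latest_checkpoint(run_dir: str) -> Optional[str]:
    """Return the path of the model_<iteration>.pt with the highest iteration."""
    best_path, best_iter = None, -1
    for name in os.listdir(run_dir):
        m = _CKPT.match(name)
        if m and int(m.group(1)) > best_iter:
            best_iter = int(m.group(1))
            best_path = os.path.join(run_dir, name)
    return best_path
```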
The run and checkpoint to load can be selected by setting `load_run` and `checkpoint` in the train config.

| Go2 | G1 | H1 | H1_2 |
|---|---|---|---|
To execute sim2sim in Mujoco, run:

```shell
python deploy/deploy_mujoco/deploy_mujoco.py {config_name}
```

`config_name`: the name of the configuration file, looked up under `deploy/deploy_mujoco/configs/`, e.g. `g1.yaml`, `h1.yaml`, `h1_2.yaml`.
Example:

```shell
python deploy/deploy_mujoco/deploy_mujoco.py g1.yaml
```

| G1 | H1 | H1_2 |
|---|---|---|
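The config lookup described above can be sketched as follows. This is a hypothetical helper, not the repo's actual code; it only mirrors the stated behavior of resolving `{config_name}` under `deploy/deploy_mujoco/configs/`:

```python
# Sketch (hypothetical, mirrors the lookup the README describes): resolve a
# config file name like "g1.yaml" to a path under deploy/deploy_mujoco/configs/.
import os

def resolve_config(config_name: str, root: str = "deploy/deploy_mujoco/configs") -> str:
    """Return the full path of the named config, or raise if it is missing."""
    path = os.path.join(root, config_name)
    if not os.path.isfile(path):
        available = sorted(os.listdir(root)) if os.path.isdir(root) else []
        raise FileNotFoundError("%s not found; available: %s" % (path, available))
    return path
```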
Refer to Deploy on Physical Robot (English) | Physical Deployment (Simplified Chinese).