In this blog, we will conduct experiments on the Unitree G1. Specifically, using RL + IL together with motion retargeting from the CMU MoCap dataset, we will attempt to teach the Unitree G1 to run, jump, ...
In previous blog posts, we have shown successful applications of CPG-based RL for robot locomotion in both 2D (game) and 3D (world) physics simulation environments. While this approach offers the flexibility to adjust walking-gait parameters on the fly, it requires heavy reward tuning and prolonged training. On the other hand, imitation learning from human demonstrations has shown faster convergence when cloning natural gaits. Here we will try to combine the two techniques and explore the feasibility of using a 2D CPG expert to guide a 3D humanoid robot in learning to walk.
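To make the idea concrete, below is a minimal sketch of one common way to let an expert reference guide an RL policy: shape the reward so the agent is paid both for the task (e.g., tracking a forward velocity) and for staying close to reference joint angles produced by the CPG expert, in the spirit of DeepMimic-style reward shaping. All function names, joint vectors, and weights here are illustrative assumptions, not the actual code used in these experiments.

```python
# Sketch: imitation-shaped reward that blends a CPG-expert tracking term
# with a simple task term. Values and weights are placeholders.
import numpy as np

def imitation_reward(q, q_ref, sigma=0.5):
    # exponentiated tracking error between measured joint angles q
    # and the CPG expert's reference joint angles q_ref
    err = np.sum((q - q_ref) ** 2)
    return np.exp(-err / (2.0 * sigma ** 2))

def combined_reward(q, q_ref, forward_vel, target_vel=1.0,
                    w_imit=0.7, w_task=0.3):
    r_imit = imitation_reward(q, q_ref)
    # task term: keep the base forward velocity near the target
    r_task = np.exp(-(forward_vel - target_vel) ** 2)
    return w_imit * r_imit + w_task * r_task

# Example for one timestep, with hypothetical 4-DoF joint vectors
q = np.array([0.10, -0.20, 0.30, -0.10])       # measured joint angles
q_ref = np.array([0.12, -0.18, 0.25, -0.05])   # CPG expert reference
print(combined_reward(q, q_ref, forward_vel=0.8))
```

A high imitation weight early in training tends to produce natural-looking gaits quickly, and the weight can be annealed later so the policy is free to deviate from the expert when the task demands it.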
In this blog, we will attempt to adapt CPG + RL, which was successfully applied to the 2D Gym BipedalWalker, to a model of a real 3D bipedal humanoid: the Unitree H1-2.
In the previous blog, we provided a brief introduction to central pattern generators (CPGs) and how they can be used as quadrupedal locomotion controllers. CPG-based methods offer advantages such as parameterizable gait generation and the ability to adjust motion patterns on the fly. It is therefore interesting to see how this method performs when applied to bipedal robots.
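For readers who have not seen the earlier post, here is a minimal sketch of the kind of controller meant by "CPG": a pair of phase oscillators, one per leg, that output hip and knee joint targets whose frequency and amplitude can be changed while the robot is running. The class name, joint layout, and constants are illustrative assumptions, not the controller from the previous blog.

```python
# Sketch: two anti-phase oscillators generating hip/knee joint targets.
# Frequency and amplitudes are plain attributes, so the gait can be
# re-parameterized on the fly.
import numpy as np

class SimpleCPG:
    def __init__(self, freq_hz=1.5, hip_amp=0.4, knee_amp=0.6, dt=0.01):
        self.freq = freq_hz        # stepping frequency [Hz]
        self.hip_amp = hip_amp     # hip swing amplitude [rad]
        self.knee_amp = knee_amp   # knee flexion amplitude [rad]
        self.dt = dt
        # left and right legs half a cycle apart (anti-phase walking)
        self.phase = np.array([0.0, np.pi])

    def step(self):
        # advance oscillator phases
        self.phase = (self.phase + 2.0 * np.pi * self.freq * self.dt) % (2.0 * np.pi)
        hip = self.hip_amp * np.sin(self.phase)
        # flex the knee only during the swing half of the cycle
        knee = self.knee_amp * np.clip(np.sin(self.phase), 0.0, None)
        # desired joint angles: [hip_L, hip_R, knee_L, knee_R]
        return np.concatenate([hip, knee])

cpg = SimpleCPG()
for t in range(200):
    targets = cpg.step()   # would be fed to per-joint PD position controllers
    if t == 100:
        cpg.freq = 2.0     # speed up the gait mid-run
```

In a CPG + RL setup, the policy typically does not output joint torques directly; instead it modulates parameters like these (frequency, amplitudes, phase offsets), which is what makes the resulting gait easy to adjust at run time.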