In this blog post, we train a text-conditioned robot control policy. It extends a plain adversarial imitation policy by accepting user text instructions, so the policy can generate motions in different styles. Specifically, we integrate a BERT language model into an adversarial imitation RL pipeline.
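As a rough sketch of what this text conditioning could look like: a frozen BERT encoder summarizes the instruction into a fixed-size embedding, which is concatenated with the proprioceptive observation before the policy MLP. The model name `bert-base-uncased`, the network shape, and the dimensions below are illustrative assumptions, not the exact configuration used in the post.

```python
# Minimal sketch: condition a policy network on frozen BERT text embeddings.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class TextConditionedPolicy(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.bert.requires_grad_(False)  # keep the language model frozen
        self.mlp = nn.Sequential(
            nn.Linear(obs_dim + self.bert.config.hidden_size, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    @torch.no_grad()
    def encode_text(self, instruction: str) -> torch.Tensor:
        tokens = self.tokenizer(instruction, return_tensors="pt")
        # Use the [CLS] embedding as a fixed-size summary of the instruction.
        return self.bert(**tokens).last_hidden_state[:, 0]

    def forward(self, obs: torch.Tensor, instruction: str) -> torch.Tensor:
        text_emb = self.encode_text(instruction).expand(obs.shape[0], -1)
        return self.mlp(torch.cat([obs, text_emb], dim=-1))

policy = TextConditionedPolicy(obs_dim=48, act_dim=12)
action = policy(torch.randn(1, 48), "walk forward with a bouncy gait")
```

Freezing BERT keeps the RL optimization focused on the policy head; only the style-relevant information in the instruction embedding needs to reach the motor layers.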
VLA (Vision-Language-Action) models have shown promising results for robots performing tasks in complex environments from visual input and textual instructions. In particular, chunk-based action generation is crucial: it enables long-horizon planning while keeping the motion within each chunk smooth. However, a naive chunk-by-chunk execution strategy can produce jerky robot motion at chunk boundaries. In this blog post, we propose a predictive flow-matching action decoder that generates smooth action sequences across chunk transitions, a key component of our lightweight VLA variant implementation.
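To make the flow-matching idea concrete, here is a minimal sketch of a decoder that conditions on the tail of the previous chunk so consecutive chunks join smoothly, sampling by Euler integration of a learned velocity field. The network shape, step count, and conditioning scheme are illustrative assumptions rather than the post's exact design.

```python
# Minimal sketch: flow-matching action decoder conditioned on the previous
# chunk's tail, sampled by Euler integration from noise (t=0) to actions (t=1).
import torch
import torch.nn as nn

class FlowMatchingDecoder(nn.Module):
    def __init__(self, act_dim: int, chunk_len: int, ctx_dim: int, hidden: int = 256):
        super().__init__()
        self.act_dim, self.chunk_len = act_dim, chunk_len
        in_dim = act_dim * chunk_len + ctx_dim + 1  # noisy chunk + context + time
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim * chunk_len),
        )

    def velocity(self, x: torch.Tensor, t: torch.Tensor, ctx: torch.Tensor) -> torch.Tensor:
        inp = torch.cat([x.flatten(1), ctx, t], dim=-1)
        return self.net(inp).view(-1, self.chunk_len, self.act_dim)

    @torch.no_grad()
    def sample(self, ctx: torch.Tensor, steps: int = 10) -> torch.Tensor:
        x = torch.randn(ctx.shape[0], self.chunk_len, self.act_dim)
        for i in range(steps):
            t = torch.full((ctx.shape[0], 1), i / steps)
            x = x + self.velocity(x, t, ctx) / steps  # Euler step along the flow
        return x

decoder = FlowMatchingDecoder(act_dim=12, chunk_len=16, ctx_dim=12 * 4)
prev_tail = torch.zeros(1, 12 * 4)      # last 4 actions of the previous chunk
next_chunk = decoder.sample(prev_tail)  # (1, 16, 12): next chunk, boundary-aware
```

Because the velocity field sees the previous chunk's tail as context, the sampled chunk can start near where the last one ended, which is what removes the jerk at chunk switches.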
In this blog post, we will conduct some experiments on the Unitree G1. Specifically, combining RL + IL with motion retargeting from the CMU MoCap dataset, we will attempt to teach the Unitree G1 to run, jump, ...
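A full retargeting pipeline handles skeleton differences and contact constraints; as a toy per-joint sketch, the step below only resamples a MoCap joint trajectory to the control rate and clips it to the robot's joint limits. The joint limits and frame rates are illustrative placeholders, not the real Unitree G1 configuration.

```python
# Minimal sketch: resample one MoCap joint trajectory in time and enforce
# robot joint limits (the simplest slice of a motion-retargeting pipeline).
import numpy as np

def retarget_joint(mocap_angles: np.ndarray, mocap_fps: float,
                   ctrl_fps: float, joint_limits: tuple[float, float]) -> np.ndarray:
    duration = len(mocap_angles) / mocap_fps
    src_t = np.linspace(0.0, duration, len(mocap_angles))
    dst_t = np.arange(0.0, duration, 1.0 / ctrl_fps)
    resampled = np.interp(dst_t, src_t, mocap_angles)  # linear time resampling
    return np.clip(resampled, *joint_limits)

# A fake 4 s, 120 Hz clip resampled to a 50 Hz control loop for a knee joint.
clip = np.sin(np.linspace(0, 4 * np.pi, 480))
knee_traj = retarget_joint(clip, mocap_fps=120.0, ctrl_fps=50.0,
                           joint_limits=(-0.1, 2.3))
```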
In previous blog posts, we showed successful applications of CPG-based RL for robot locomotion in both 2D (game) and 3D (world) physics simulation environments. While this approach offers the flexibility to adjust walking-gait parameters on the fly, it requires heavy reward tuning and long training times. Imitation learning from human demonstrations, on the other hand, converges much faster toward natural gaits. Here we will try to combine these two techniques and explore the feasibility of using a 2D CPG expert to guide a 3D humanoid robot in learning to walk.
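One simple way to let a CPG expert guide an RL learner is to treat the CPG output as a reference trajectory and give the policy an exponential tracking reward on top of its task rewards. The sketch below assumes that framing; the gains, amplitudes, and four-joint layout are illustrative, not the post's actual reward design.

```python
# Minimal sketch: a phase-oscillator CPG produces reference hip/knee angles,
# and an imitation-style reward scores how closely the robot tracks them.
import numpy as np

def cpg_reference(phase: float, amp_hip: float = 0.4, amp_knee: float = 0.7) -> np.ndarray:
    """Reference angles for [left_hip, left_knee, right_hip, right_knee]."""
    left, right = phase, phase + np.pi          # legs oscillate in anti-phase
    return np.array([
        amp_hip * np.sin(left),
        amp_knee * max(0.0, np.sin(left)),      # knee only flexes during swing
        amp_hip * np.sin(right),
        amp_knee * max(0.0, np.sin(right)),
    ])

def imitation_reward(robot_q: np.ndarray, phase: float, k: float = 5.0) -> float:
    """Exponential tracking term, to be combined with task rewards in training."""
    err = np.sum((robot_q - cpg_reference(phase)) ** 2)
    return float(np.exp(-k * err))

phase = 2 * np.pi * 1.5 * (42 / 100)  # 1.5 Hz gait at step 42 of a 100 Hz loop
r = imitation_reward(np.zeros(4), phase)
```

The appeal of this hybrid is that the CPG supplies the gait structure that pure RL would otherwise have to discover through reward tuning, while RL handles 3D balance that the 2D expert never sees.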