We present a general, two-stage reinforcement learning approach that, starting from a single demonstration generated by trajectory optimization, creates robust policies that can be deployed on real robots without any additional training. The demonstration is used in the first stage as a starting point to facilitate initial exploration. In the second stage, the task reward is optimized directly, yielding a policy that is robust to environment uncertainties. We demonstrate and examine in detail the performance and robustness of our approach on highly dynamic hopping and bounding tasks on a quadruped robot.
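The two-stage schedule can be sketched as follows. This is only an illustrative outline, assuming a generic on-policy RL trainer: the names `env`, `policy`, `rl_update`, `demo`, `tracking_reward`, and `task_reward` are hypothetical placeholders, not the actual implementation.

```python
import numpy as np

def tracking_reward(state, ref_state):
    """Stage 1 reward: keep rollouts close to the single demonstration."""
    return float(np.exp(-np.linalg.norm(np.asarray(state) - np.asarray(ref_state)) ** 2))

def task_reward(state):
    """Stage 2 reward: the raw task objective (placeholder, e.g. hopping height)."""
    return float(state[0])

def train_two_stage(env, policy, demo, rl_update, iters_stage1, iters_stage2):
    # Stage 1: the demonstration guides initial exploration.
    for _ in range(iters_stage1):
        states, actions = env.rollout(policy)
        rewards = [tracking_reward(s, demo[t]) for t, s in enumerate(states)]
        policy = rl_update(policy, states, actions, rewards)

    # Stage 2: drop the demonstration and optimize the task reward directly,
    # randomizing the environment for robustness to uncertainties.
    for _ in range(iters_stage2):
        env.randomize()  # e.g. ground configuration, dynamics parameters
        states, actions = env.rollout(policy)
        rewards = [task_reward(s) for s in states]
        policy = rl_update(policy, states, actions, rewards)

    return policy
```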
Hopping task
Variety of behavior from a single demonstration
Robustness to ground configurations not explicitly trained for
Robustness to external perturbations without explicit training for them
Bounding task
Variety of behavior from a single demonstration
Robustness to ground configurations not explicitly trained for
Robustness to external perturbations without explicit training for them