INPROCEEDINGS

Continuous value iteration (CVI) reinforcement learning and imaginary experience replay (IER) for learning multi-goal, continuous action and state space controllers

2019 International Conference on Robotics and Automation (ICRA) | May 2019

Authors

Gerken, Andreas and Spranger, Michael

Abstract

This paper presents a novel model-free Reinforcement Learning algorithm for learning behavior in continuous action, state, and goal spaces. The algorithm approximates optimal value functions using non-parametric estimators. It is able to efficiently learn to reach multiple arbitrary goals in deterministic and non-deterministic environments. To improve generalization in the goal space, we propose a novel sample augmentation technique. Using these methods, robots learn faster and obtain better controllers overall. We benchmark the proposed algorithms in simulation and on a real-world voltage-controlled robot that learns to maneuver in a non-observable Cartesian task space.
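The abstract does not specify how imaginary experience replay (IER) augments samples in the goal space. As a rough illustration only, the sketch below shows a generic goal-relabeling augmentation in the hindsight-replay family, which serves the same purpose of improving goal-space generalization; the function name, transition layout, and binary goal-attainment reward are all assumptions, not the paper's actual method.

```python
import random

def relabel_transitions(episode, num_relabels=4, rng=random):
    """Augment an episode by substituting alternative ('imaginary') goals.

    Hypothetical illustration: each transition is (state, action,
    next_state, goal). Relabeled copies reuse states visited later in
    the same episode as substitute goals and recompute the reward as
    binary goal attainment (1.0 if next_state equals the goal).
    """
    augmented = []
    for i, (s, a, s_next, goal) in enumerate(episode):
        # keep the original transition with its true goal and reward
        reward = 1.0 if s_next == goal else 0.0
        augmented.append((s, a, s_next, goal, reward))
        # sample substitute goals from states reached later in the episode
        future_states = [t[2] for t in episode[i:]]
        for g in rng.sample(future_states, min(num_relabels, len(future_states))):
            r = 1.0 if s_next == g else 0.0
            augmented.append((s, a, s_next, g, r))
    return augmented
```

Relabeling with achieved states guarantees that some augmented transitions carry a positive reward, which is what densifies the learning signal in sparse multi-goal settings.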