Crossing the Reality Gap: the Sim-to-Real Transferability of Robot Controllers in Reinforcement Learning Problems
Date:
I was invited to present a research topic I focused on during my PhD in an integrative lecture of the Machine Learning course held by Prof. Eric Medvet within the Master of Science in Data Science at the University of Trieste.
Content:
The growing demand for robots able to act autonomously in complex scenarios has greatly accelerated the adoption of Reinforcement Learning (RL) in robot control applications. However, the intrinsic trial-and-error nature of RL may result in long training times on real robots and, moreover, may lead to dangerous outcomes. While simulators are useful tools to accelerate RL training and to ensure safety, they often provide only an approximate model of robot dynamics, giving rise to what is called the reality gap (RG): a mismatch between simulated and real control-law performance caused by the inaccurate representation of the real environment in simulation. The most undesirable outcome occurs when a controller learned in simulation fails the task on the real robot, i.e., an unsuccessful sim-to-real transfer. The goal of the present survey is threefold:
- to identify the main approaches to facing the RG problem in the context of robot control with RL,
- to point out their shortcomings, and
- to identify potential new research areas.
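As a purely illustrative aside, one well-known family of approaches to narrowing the RG randomizes the simulator's dynamics during training, so the learned controller is exposed to a distribution of models rather than a single, possibly inaccurate, nominal one. The sketch below is not taken from the lecture; `make_env`, `agent`, and the parameter ranges are hypothetical placeholders standing in for whatever simulator and RL learner one actually uses.

```python
# Minimal sketch of dynamics randomization (hypothetical names and ranges):
# re-sample simulated physical parameters at every episode so the policy
# cannot overfit a single, inaccurate simulation model.

import random

# Hypothetical ranges around the nominal simulated values.
PARAM_RANGES = {
    "mass": (0.8, 1.2),          # kg
    "friction": (0.5, 1.5),      # dimensionless scaling
    "actuator_gain": (0.9, 1.1),
}

def sample_dynamics():
    """Draw one set of randomized dynamics parameters."""
    return {name: random.uniform(low, high) for name, (low, high) in PARAM_RANGES.items()}

def train_with_dynamics_randomization(make_env, agent, n_episodes=1000):
    """make_env(params) -> environment exposing reset()/step(); agent is any RL learner."""
    for _ in range(n_episodes):
        env = make_env(sample_dynamics())   # fresh randomized dynamics each episode
        obs = env.reset()
        done = False
        while not done:
            action = agent.act(obs)
            obs, reward, done = env.step(action)
            agent.observe(obs, reward, done)
```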