Fri Dec 08 08:00 AM -- 06:30 PM (PST) @ 104 B
Acting and Interacting in the Real World: Challenges in Robot Learning
In recent years, robotics has made significant strides towards applications of real value in the public domain. Robots are now increasingly expected to work for and alongside us in complex, dynamic environments. Machine learning has been a key enabler of this success, particularly in the realm of robot perception where, due to substantial overlap with the machine vision community, methods and training data can be readily leveraged.
Recent advances in reinforcement learning and learning from demonstration — geared towards teaching agents how to act — provide a tantalising glimpse at a promising future trajectory for robot learning. Mastery of challenges such as the Atari suite and the game of Go builds significant excitement as to what our robots may be able to do for us in the future. However, this success relies on the ability to learn cheaply, often within the confines of a virtual environment, by trial and error over as many episodes as required. This presents a significant challenge for embodied systems acting and interacting in the real world. Not only is there a cost (monetary or in execution time) associated with each trial, limiting the amount of training data obtainable, but there also exist safety constraints which make exploration of the state space simply unrealistic: teaching a real robot to cross a real road via reinforcement learning remains, for now, a noble yet somewhat far-fetched goal. A significant gulf therefore exists between prior art on teaching agents to act and effective approaches to real-world robot learning. This, we posit, is currently one of the principal impediments to advancing real-world robotics science.
In order to bridge this gap, researchers and practitioners in robot learning have to address a number of key challenges to allow real-world systems to be trained in a safe and data-efficient manner. This workshop aims to bring together experts in reinforcement learning, learning from demonstration, deep learning, field robotics and beyond to discuss what the principal challenges are and how they might be addressed. Given the workshop's emphasis on data-efficient learning, of particular interest will be contributions on representation learning, curriculum learning, task transfer, one-shot learning, domain transfer (in particular from simulation to real-world tasks), reinforcement learning for real-world applications, learning from demonstration for real-world applications, knowledge learning from observation and interaction, active concept acquisition, and learning causal models.