Workshop: 3rd Robot Learning Workshop

Masha Itkina, Alex Bewley, Roberto Calandra, Igor Gilitschenski, Julien Perez, Ransalu Senanayake, Markus Wulfmeier, Vincent Vanhoucke

2020-12-11T07:30:00-08:00 - 2020-12-11T19:30:00-08:00
Abstract: In this workshop, we aim to discuss the challenges and opportunities for machine learning research in the context of physical systems. This discussion involves presenting recent methods and the lessons learned from deploying them on real-world platforms. Such deployment requires a significant degree of generalization: the real world is vastly more complex and diverse than fixed, curated datasets and simulations. Deployed machine learning models must scale to this complexity, adapt to novel situations, and recover from mistakes. Moreover, the workshop aims to further strengthen the ties between the robotics and machine learning communities by discussing how their respective recent directions create new challenges, requirements, and opportunities for future research.

Following the success of previous robot learning workshops at NeurIPS, the goal of this workshop is to bring together a diverse set of scientists at various stages of their careers and foster interdisciplinary communication and discussion.
In contrast to previous robot learning workshops, which focused on applications of machine learning in robotics, this workshop extends the discussion to how real-world applications in the context of robotics can trigger impactful new directions for the development of machine learning. For a more engaging workshop, we encourage each of our senior presenters to co-present with a PhD student or postdoctoral researcher from their lab. Additionally, all our presenters, invited and contributed, are asked to add a "dirty laundry" slide describing the limitations and shortcomings of their work. We expect this will aid further discussion in the poster and panel sessions, in addition to helping junior researchers avoid similar roadblocks along their path.


Schedule

2020-12-11T07:30:00-08:00 - 2020-12-11T07:45:00-08:00
Introduction
Masha Itkina
2020-12-11T07:45:00-08:00 - 2020-12-11T08:30:00-08:00
Invited Talk - "Walking the Boundary of Learning and Interaction"
Dorsa Sadigh, Erdem Biyik
There have been significant advances in the field of robot learning in the past decade. However, many challenges still remain when considering how robot learning can advance interactive agents such as robots that collaborate with humans. This includes autonomous vehicles that interact with human-driven vehicles or pedestrians, service robots collaborating with their users at home over short or long periods of time, or assistive robots helping patients with disabilities. This introduces an opportunity for developing new robot learning algorithms that can help advance interactive autonomy. In this talk, we will discuss a formalism for human-robot interaction built upon ideas from representation learning. Specifically, we will first discuss the notion of latent strategies: low-dimensional representations sufficient for capturing non-stationary interactions. We will then talk about the challenges of learning such representations when interacting with humans, and how we can develop data-efficient techniques that enable actively learning computational models of human behavior from demonstrations and preferences.
2020-12-11T08:30:00-08:00 - 2020-12-11T08:31:00-08:00
Introduction to Contributed Talk
2020-12-11T08:31:00-08:00 - 2020-12-11T08:45:00-08:00
Contributed Talk 1 - "Accelerating Reinforcement Learning with Learned Skill Priors" (Best Paper Runner-Up)
Karl Pertsch
Intelligent agents rely heavily on prior experience when learning a new task, yet most modern reinforcement learning (RL) approaches learn every task from scratch. One approach for leveraging prior knowledge is to transfer skills learned on prior tasks to the new task. However, as the amount of prior experience increases, the number of transferable skills grows too, making it challenging to explore the full set of available skills during downstream learning. Yet, intuitively, not all skills should be explored with equal probability; for example, information about the current state can hint at which skills are promising to explore. In this work, we propose to implement this intuition by learning a prior over skills. We propose a deep latent variable model that jointly learns an embedding space of skills and the skill prior from offline agent experience. We then extend common maximum-entropy RL approaches to use skill priors to guide downstream learning. We validate our approach, SPiRL (Skill-Prior RL), on complex navigation and robotic manipulation tasks and show that learned skill priors are essential for effective skill transfer from rich datasets. Videos and code are available at https://clvrai.com/spirl.
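As an illustration of the core idea, the sketch below replaces the entropy bonus of a SAC-style actor loss with a KL penalty toward a learned, state-conditioned skill prior. This is a minimal sketch in PyTorch, not the SPiRL implementation; the network sizes, the class and function names (GaussianHead, policy_loss), and the placeholder Q-function are assumptions for illustration only.

```python
# Minimal sketch (not the authors' code) of the core SPiRL idea: replace the
# entropy bonus of max-entropy RL with a KL penalty toward a learned,
# state-conditioned skill prior p(z|s). Sizes and names are illustrative.
import torch
import torch.nn as nn
from torch.distributions import Normal, kl_divergence

class GaussianHead(nn.Module):
    """Maps a state to a diagonal Gaussian over latent skills z."""
    def __init__(self, state_dim, skill_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * skill_dim))
    def forward(self, state):
        mu, log_std = self.net(state).chunk(2, dim=-1)
        return Normal(mu, log_std.clamp(-5, 2).exp())

state_dim, skill_dim = 10, 8
policy = GaussianHead(state_dim, skill_dim)       # high-level policy pi(z|s)
skill_prior = GaussianHead(state_dim, skill_dim)  # learned offline, then frozen
for p in skill_prior.parameters():
    p.requires_grad_(False)

def policy_loss(states, q_values_fn, alpha=0.1):
    """SAC-style actor loss with KL(pi || prior) in place of entropy."""
    pi = policy(states)
    z = pi.rsample()                               # reparameterized skill sample
    kl = kl_divergence(pi, skill_prior(states)).sum(-1)
    return (alpha * kl - q_values_fn(states, z)).mean()

# Toy usage with a dummy critic.
states = torch.randn(32, state_dim)
q_fn = lambda s, z: -(z ** 2).sum(-1)              # placeholder Q-function
loss = policy_loss(states, q_fn)
loss.backward()
```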
2020-12-11T08:45:00-08:00 - 2020-12-11T09:45:00-08:00
Poster Session 1
2020-12-11T09:45:00-08:00 - 2020-12-11T09:46:00-08:00
Introduction to Invited Talk
2020-12-11T09:46:00-08:00 - 2020-12-11T10:30:00-08:00
Invited Talk - "Object- and Action-Centric Representational Robot Learning"
Pete Florence, Daniel Seita
In this talk we'll discuss different views on representations for robot learning, in particular towards the goal of precise, generalizable vision-based manipulation skills that are sample-efficient and scalable to train. Object-centric representations, on the one hand, can enable using rich additional sources of learning, and can enable various efficient downstream behaviors. Action-centric representations, on the other hand, can learn high-level planning, and do not have to explicitly instantiate objectness. As case studies we’ll look at two recent papers in these two areas.
2020-12-11T10:30:00-08:00 - 2020-12-11T10:31:00-08:00
Introduction to Invited Talk
2020-12-11T10:31:00-08:00 - 2020-12-11T11:15:00-08:00
Invited Talk - "State of Robotics @ Google"
Carolina Parada
Robotics@Google’s mission is to make robots useful in the real world through machine learning. We are excited about a new model for robotics, designed for generalization across diverse environments and instructions. This model is focused on scalable, data-driven learning that is task-agnostic, leverages simulation, learns from past experience, and can be quickly adapted to work in the real world through limited interaction. In this talk, we’ll share some of our recent work in this direction in both manipulation and locomotion applications.
2020-12-11T11:15:00-08:00 - 2020-12-11T15:00:00-08:00
Break
2020-12-11T15:00:00-08:00 - 2020-12-11T16:00:00-08:00
Discussion Panel
Pete Florence, Dorsa Sadigh, Carolina Parada, Christin Jeannette Bohg, Roberto Calandra, Peter Stone, Fabio Ramos
2020-12-11T16:00:00-08:00 - 2020-12-11T16:01:00-08:00
Introduction to Invited Talk
2020-12-11T16:01:00-08:00 - 2020-12-11T16:45:00-08:00
Invited Talk - "Learning-based control of a legged robot"
Jemin Hwangbo, JooWoong Byun
Legged robots pose one of the greatest challenges in robotics. Dynamic and agile maneuvers of animals cannot be imitated by existing methods that are crafted by humans. A compelling alternative is reinforcement learning, which requires minimal craftsmanship and promotes the natural evolution of a control policy. However, so far, reinforcement learning research for legged robots has mainly been limited to simulation, and only a few comparatively simple examples have been deployed on real systems. The primary reason is that training with real robots, particularly with dynamically balancing systems, is complicated and expensive. Recent algorithmic improvements have made simulation both cheaper and more accurate. Leveraging such tools to obtain control policies is thus a promising direction. However, a few simulation-related issues have to be addressed before they can be used in practice. The biggest obstacle is the so-called reality gap: discrepancies between the simulated and the real system. Hand-crafted models often fail to achieve reasonable accuracy due to the complexity of the actuation systems of existing robots. This talk will focus on how such obstacles can be overcome. The main approaches are twofold: a fast and accurate algorithm for solving contact dynamics, and a data-driven simulation-augmentation method using deep learning. These methods are applied to the ANYmal robot, a sophisticated medium-dog-sized quadrupedal system. Using policies trained in simulation, the quadrupedal machine achieves locomotion skills that go beyond what had been achieved with prior methods: ANYmal is capable of precisely and energy-efficiently following high-level body velocity commands, running faster than ever before, and recovering from falling even in complex configurations.
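To make the data-driven simulation-augmentation idea concrete, the sketch below shows a small learned actuator model that predicts realized joint torque from a short history of position errors and velocities, trained on logged robot data and then usable inside the simulator in place of an idealized actuator. The architecture, history length, and training loop are illustrative assumptions, not the actual ANYmal actuator model.

```python
# Illustrative sketch of a learned actuator model (assumed setup, not the
# actual ANYmal model): predict realized joint torque from a short history of
# position errors and joint velocities logged on the real robot, then use the
# network inside the simulator in place of an idealized actuator.
import torch
import torch.nn as nn

HISTORY = 3  # number of past time steps fed to the network (assumption)

class ActuatorNet(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        # Input: HISTORY position errors + HISTORY joint velocities per joint.
        self.net = nn.Sequential(
            nn.Linear(2 * HISTORY, hidden), nn.Softsign(),
            nn.Linear(hidden, hidden), nn.Softsign(),
            nn.Linear(hidden, 1),
        )
    def forward(self, pos_err_hist, vel_hist):
        x = torch.cat([pos_err_hist, vel_hist], dim=-1)
        return self.net(x).squeeze(-1)             # predicted torque per joint

# Supervised training on logged real-robot data (dummy tensors stand in here).
model = ActuatorNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
pos_err = torch.randn(256, HISTORY)
vel = torch.randn(256, HISTORY)
torque = torch.randn(256)                          # measured torques (placeholder)
for _ in range(10):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(pos_err, vel), torque)
    loss.backward()
    opt.step()
```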
2020-12-11T16:45:00-08:00 - 2020-12-11T16:46:00-08:00
Introduction to Contributed Talk
2020-12-11T16:46:00-08:00 - 2020-12-11T17:00:00-08:00
Contributed Talk 2 - "Multi-Robot Deep Reinforcement Learning via Hierarchically Integrated Models" (Best Paper)
Katie Kang
Deep reinforcement learning algorithms require large and diverse datasets in order to learn successful perception-based control policies. However, gathering such datasets with a single robot can be prohibitively expensive. In contrast, collecting data with multiple platforms, possibly with different dynamics, is a more scalable approach to large-scale data collection. But how can deep reinforcement learning algorithms leverage these dynamically heterogeneous datasets? In this work, we propose a deep reinforcement learning algorithm with hierarchically integrated models (HInt). At training time, HInt learns separate perception and dynamics models, and at test time, HInt integrates the two models in a hierarchical manner and plans actions with the integrated model. This method of planning with hierarchically integrated models allows the algorithm to train on datasets gathered by a variety of different platforms, while respecting the physical capabilities of the deployment robot at test time. Our simulated and real-world navigation experiments show that HInt outperforms conventional hierarchical policies and single-source approaches.
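A minimal sketch of the hierarchical-integration idea follows: a perception module maps observations to an intermediate state, a separately trained dynamics module rolls that state forward under candidate actions, and a simple shooting planner scores the rollouts. The dimensions, module names, reward, and planner are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation) of composing a perception
# model with a separately trained dynamics model at test time, then planning
# over the integrated model with random shooting. All sizes are illustrative.
import torch
import torch.nn as nn

OBS_DIM, STATE_DIM, ACT_DIM, HORIZON = 64, 8, 2, 5

perception = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                           nn.Linear(64, STATE_DIM))           # o_t -> s_t
dynamics = nn.Sequential(nn.Linear(STATE_DIM + ACT_DIM, 64), nn.ReLU(),
                         nn.Linear(64, STATE_DIM))             # (s_t, a_t) -> s_{t+1}

def plan(obs, n_candidates=128):
    """Random-shooting planner over the integrated perception+dynamics model."""
    state = perception(obs).expand(n_candidates, STATE_DIM)
    actions = torch.rand(n_candidates, HORIZON, ACT_DIM) * 2 - 1  # actions in [-1, 1]
    returns = torch.zeros(n_candidates)
    for t in range(HORIZON):
        state = dynamics(torch.cat([state, actions[:, t]], dim=-1))
        returns += -state.norm(dim=-1)         # placeholder reward: stay near origin
    best = returns.argmax()
    return actions[best, 0]                    # execute first action of best sequence

with torch.no_grad():
    action = plan(torch.randn(1, OBS_DIM))
```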
2020-12-11T17:00:00-08:00 - 2020-12-11T17:30:00-08:00
Break
2020-12-11T17:30:00-08:00 - 2020-12-11T17:31:00-08:00
Introduction to Invited Talk
2020-12-11T17:31:00-08:00 - 2020-12-11T18:15:00-08:00
Invited Talk - "RL with Sim2Real in the loop / Online Domain Adaptation for Mapping"
Fabio Ramos, Anthony Tompkins
We will give two talks describing recent developments by the group. First, we will present a Bayesian solution to the problem of estimating posterior distributions of simulation parameters given real data. The uncertainty captured in the posterior can significantly improve the performance of reinforcement learning algorithms trained in simulation but deployed in the real world. We will also show that sequentially combining posterior parameter estimation and policy updates leads to further improvements in the convergence rate. In the second part, we will address mapping as an online classification problem. We will show that optimal transport can be a valuable theoretical framework for quickly transferring geometric information obtained in one environment, real or simulated, into a secondary domain, leveraging prior information in an elegant and efficient manner.
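The toy sketch below illustrates the general recipe behind the first part: approximate a posterior over a simulator parameter from real observations, then randomize the simulator by sampling from that posterior rather than from a broad hand-picked prior. It uses a crude rejection-ABC approximation with an invented one-parameter simulator, whereas the actual work uses a more sophisticated Bayesian inference procedure; the simulator, discrepancy measure, and tolerance are all assumptions.

```python
# Toy sketch of posterior estimation over a simulator parameter (illustrative,
# not the group's actual method): compare simulated and real trajectories,
# keep the best-matching parameter samples, and draw simulator settings from
# that approximate posterior during training.
import numpy as np

rng = np.random.default_rng(0)

def simulate(friction, n_steps=50):
    """Stand-in simulator: velocity decay of a sliding block under friction."""
    v, traj = 1.0, []
    for _ in range(n_steps):
        v = max(v - friction * 0.1, 0.0)
        traj.append(v)
    return np.array(traj)

real_traj = simulate(0.35) + rng.normal(0, 0.01, 50)   # pretend real-world data

# Rejection-ABC style posterior approximation over the friction parameter.
prior_samples = rng.uniform(0.0, 1.0, 5000)
distances = np.array([np.mean((simulate(f) - real_traj) ** 2) for f in prior_samples])
posterior = prior_samples[distances < np.quantile(distances, 0.02)]

# During RL training, each episode draws its simulator parameters from the
# posterior, concentrating training on plausible dynamics.
episode_friction = rng.choice(posterior)
print(f"posterior mean: {posterior.mean():.3f}, episode draw: {episode_friction:.3f}")
```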
2020-12-11T18:15:00-08:00 - 2020-12-11T19:15:00-08:00
Poster Session 2
2020-12-11T19:15:00-08:00 - 2020-12-11T19:30:00-08:00
Closing