In this workshop, we aim to discuss the challenges and opportunities that physical systems present for machine learning research. This discussion includes presentations of recent methods and of the experience gained from deploying them on real-world platforms. Such deployment requires a significant degree of generalization: the real world is vastly more complex and diverse than fixed, curated datasets and simulations. Deployed machine learning models must scale to this complexity, adapt to novel situations, and recover from mistakes. Moreover, the workshop aims to further strengthen the ties between the robotics and machine learning communities by discussing how their respective recent directions give rise to new challenges, requirements, and opportunities for future research.
Following the success of previous robot learning workshops at NeurIPS, the goal of this workshop is to bring together a diverse set of scientists at various stages of their careers and foster interdisciplinary communication and discussion.
In contrast to previous robot learning workshops, which focused on applications of machine learning in robotics, this workshop extends the discussion to how real-world applications in robotics can trigger impactful new directions for the development of machine learning. For a more engaging workshop, we encourage each of our senior presenters to share their presentation with a PhD student or postdoctoral researcher from their lab. Additionally, all our presenters - invited and contributed - are asked to add a "dirty laundry" slide describing the limitations and shortcomings of their work. We expect this will aid further discussion in the poster and panel sessions, in addition to helping junior researchers avoid similar roadblocks along their path.
Fri 7:30 a.m. - 7:45 a.m. | Introduction | Masha Itkina
Fri 7:45 a.m. - 8:30 a.m. | Invited Talk - "Walking the Boundary of Learning and Interaction" | Dorsa Sadigh · Erdem Biyik

There have been significant advances in the field of robot learning in the past decade. However, many challenges still remain when considering how robot learning can advance interactive agents such as robots that collaborate with humans. This includes autonomous vehicles that interact with human-driven vehicles or pedestrians, service robots collaborating with their users in their homes over short or long periods of time, or assistive robots helping patients with disabilities. This introduces an opportunity for developing new robot learning algorithms that can help advance interactive autonomy. In this talk, we will discuss a formalism for human-robot interaction built upon ideas from representation learning. Specifically, we will first discuss the notion of latent strategies: low-dimensional representations sufficient for capturing non-stationary interactions. We will then talk about the challenges of learning such representations when interacting with humans, and how we can develop data-efficient techniques that enable actively learning computational models of human behavior from demonstrations and preferences.
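The talk above mentions learning models of human behavior from preferences. As one illustration of that component, here is a minimal sketch of Bradley-Terry-style reward learning from pairwise trajectory comparisons. The feature dimension, synthetic data, and learning rate are hypothetical, and this is not the presenters' implementation.

```python
import numpy as np

# Sketch: learn linear reward weights w from pairwise trajectory preferences
# under a logistic (Bradley-Terry) choice model:
#   P(A preferred over B) = sigmoid(w . (phi(A) - phi(B)))
# All data below is synthetic.

rng = np.random.default_rng(0)
d = 4                                   # trajectory feature dimension (assumed)
w_true = rng.normal(size=d)             # the "human's" hidden reward weights

phi_a = rng.normal(size=(200, d))       # features of trajectory A in each query
phi_b = rng.normal(size=(200, d))       # features of trajectory B in each query
prefs = (phi_a @ w_true > phi_b @ w_true).astype(float)  # 1 if A preferred

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w = np.zeros(d)
lr = 0.1
for _ in range(500):
    p_a = sigmoid((phi_a - phi_b) @ w)        # predicted P(A preferred)
    grad = (phi_a - phi_b).T @ (prefs - p_a)  # gradient of the log-likelihood
    w += lr * grad / len(prefs)

print("cosine similarity to true weights:",
      w @ w_true / (np.linalg.norm(w) * np.linalg.norm(w_true)))
```

Active preference learning, as discussed in the talk, would additionally choose which pair (A, B) to query next, e.g. to maximize information gain about w.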
Fri 8:31 a.m. - 8:45 a.m. | Contributed Talk 1 - "Accelerating Reinforcement Learning with Learned Skill Priors" (Best Paper Runner-Up) | Karl Pertsch

Intelligent agents rely heavily on prior experience when learning a new task, yet most modern reinforcement learning (RL) approaches learn every task from scratch. One approach for leveraging prior knowledge is to transfer skills learned on prior tasks to the new task. However, as the amount of prior experience increases, the number of transferable skills grows too, making it challenging to explore the full set of available skills during downstream learning. Yet, intuitively, not all skills should be explored with equal probability; for example, information about the current state can hint at which skills are promising to explore. In this work, we propose to implement this intuition by learning a prior over skills. We propose a deep latent variable model that jointly learns an embedding space of skills and the skill prior from offline agent experience. We then extend common maximum-entropy RL approaches to use skill priors to guide downstream learning. We validate our approach, SPiRL (Skill-Prior RL), on complex navigation and robotic manipulation tasks and show that learned skill priors are essential for effective skill transfer from rich datasets. Videos and code are available at https://clvrai.com/spirl.
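The "extend common maximum-entropy RL approaches" step amounts to replacing the entropy bonus with a KL penalty that keeps the high-level policy over skill latents close to the learned skill prior. Below is a minimal PyTorch sketch of that actor-loss term; the network sizes, the frozen prior, and the placeholder critic are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
from torch.distributions import Normal, kl_divergence

# Sketch of a SPiRL-style actor loss: penalize KL(policy(z|s) || skill_prior(z|s))
# so that downstream exploration stays close to skills seen in offline data.
# Shapes and sizes are hypothetical.

state_dim, skill_dim = 32, 10

class GaussianHead(nn.Module):
    """Outputs a diagonal Gaussian over the skill latent z given state s."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 2 * skill_dim))
    def forward(self, s):
        mu, log_std = self.net(s).chunk(2, dim=-1)
        return Normal(mu, log_std.clamp(-5, 2).exp())

policy = GaussianHead()                      # high-level policy pi(z|s)
skill_prior = GaussianHead()                 # learned prior p(z|s), kept frozen
for p in skill_prior.parameters():
    p.requires_grad_(False)

critic = nn.Sequential(nn.Linear(state_dim + skill_dim, 128), nn.ReLU(),
                       nn.Linear(128, 1))    # Q(s, z), placeholder
alpha = 0.1                                  # KL temperature (hyperparameter)

s = torch.randn(64, state_dim)               # batch of states (synthetic)
pi = policy(s)
z = pi.rsample()                             # reparameterized skill sample
q = critic(torch.cat([s, z], dim=-1)).squeeze(-1)
kl = kl_divergence(pi, skill_prior(s)).sum(-1)

actor_loss = (alpha * kl - q).mean()         # maximize Q while staying near the prior
actor_loss.backward()
```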
Fri 8:45 a.m. - 9:45 a.m. | Poster Session 1
Fri 9:46 a.m. - 10:30 a.m. | Invited Talk - "Object- and Action-Centric Representational Robot Learning" | Pete Florence · Daniel Seita

In this talk, we'll discuss different views on representations for robot learning, in particular towards the goal of precise, generalizable vision-based manipulation skills that are sample-efficient and scalable to train. Object-centric representations, on the one hand, can enable the use of rich additional sources of learning and support various efficient downstream behaviors. Action-centric representations, on the other hand, can learn high-level planning and do not have to explicitly instantiate objectness. As case studies, we'll look at two recent papers in these two areas.
Fri 10:31 a.m. - 11:15 a.m. | Invited Talk - "State of Robotics @ Google" | Carolina Parada

Robotics@Google's mission is to make robots useful in the real world through machine learning. We are excited about a new model for robotics, designed for generalization across diverse environments and instructions. This model is focused on scalable data-driven learning that is task-agnostic, leverages simulation, learns from past experience, and can be quickly adapted to work in the real world through limited interactions. In this talk, we'll share some of our recent work in this direction in both manipulation and locomotion applications.
Fri 11:15 a.m. - 3:00 p.m. | Break
Fri 3:00 p.m. - 4:00 p.m. | Discussion Panel | Pete Florence · Dorsa Sadigh · Carolina Parada · Jeannette Bohg · Roberto Calandra · Peter Stone · Fabio Ramos
Fri 4:01 p.m. - 4:45 p.m. | Invited Talk - "Learning-based Control of a Legged Robot" | Jemin Hwangbo · JooWoong Byun

Legged robots pose one of the greatest challenges in robotics. The dynamic and agile maneuvers of animals cannot be imitated by existing methods that are crafted by humans. A compelling alternative is reinforcement learning, which requires minimal craftsmanship and promotes the natural evolution of a control policy. However, so far, reinforcement learning research for legged robots has mainly been limited to simulation, and only a few, comparably simple examples have been deployed on real systems. The primary reason is that training with real robots, particularly with dynamically balancing systems, is complicated and expensive. Recent algorithmic improvements have made simulation both cheaper and more accurate. Leveraging such tools to obtain control policies is thus a seemingly promising direction. However, a few simulation-related issues have to be addressed before utilizing them in practice. The biggest obstacle is the so-called reality gap: discrepancies between the simulated and the real system. Hand-crafted models often fail to achieve a reasonable accuracy due to the complexities of the actuation systems of existing robots. This talk will focus on how such obstacles can be overcome. The approach is twofold: a fast and accurate algorithm for solving contact dynamics, and a data-driven simulation-augmentation method using deep learning. These methods are applied to the ANYmal robot, a sophisticated medium-dog-sized quadrupedal system. Using policies trained in simulation, the quadrupedal machine achieves locomotion skills that go beyond what had been achieved with prior methods: ANYmal is capable of precisely and energy-efficiently following high-level body velocity commands, running faster than ever before, and recovering from falling even in complex configurations.
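In the published ANYmal work, the data-driven simulation augmentation referenced above is an actuator network: a small neural network, trained on data logged from the real actuators, that predicts joint torque from a short history of position errors and velocities. The sketch below follows that idea; the history length, layer sizes, and placeholder data are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Sketch of an actuator network: map a short history of joint position errors
# and joint velocities to the torque the real actuator produces. Trained on
# logged real-robot data; all sizes and data here are illustrative.

HISTORY = 3           # time steps of history fed to the network (assumed)
IN_DIM = 2 * HISTORY  # (position error, velocity) per step

class ActuatorNet(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IN_DIM, hidden), nn.Softsign(),
            nn.Linear(hidden, hidden), nn.Softsign(),
            nn.Linear(hidden, 1),
        )
    def forward(self, x):        # x: (batch, IN_DIM)
        return self.net(x)       # predicted torque, (batch, 1)

net = ActuatorNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
histories = torch.randn(1024, IN_DIM)  # placeholder for logged histories
torques = torch.randn(1024, 1)         # placeholder for measured torques

# Supervised regression on the logged (history, torque) pairs.
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(histories), torques)
    loss.backward()
    opt.step()

# In simulation, this network then replaces the analytic actuator model:
# the torque applied at each joint is net(recent position errors and velocities).
```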
Fri 4:46 p.m. - 5:00 p.m. | Contributed Talk 2 - "Multi-Robot Deep Reinforcement Learning via Hierarchically Integrated Models" (Best Paper) | Yijun Kang

Deep reinforcement learning algorithms require large and diverse datasets in order to learn successful perception-based control policies. However, gathering such datasets with a single robot can be prohibitively expensive. In contrast, collecting data with multiple platforms with possibly different dynamics is a more scalable approach to large-scale data collection. But how can deep reinforcement learning algorithms leverage these dynamically heterogeneous datasets? In this work, we propose a deep reinforcement learning algorithm with hierarchically integrated models (HInt). At training time, HInt learns separate perception and dynamics models, and at test time, HInt integrates the two models in a hierarchical manner and plans actions with the integrated model. This method of planning with hierarchically integrated models allows the algorithm to train on datasets gathered by a variety of different platforms, while respecting the physical capabilities of the deployment robot at test time. Our simulated and real-world navigation experiments show that HInt outperforms conventional hierarchical policies and single-source approaches.
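As described above, HInt trains a perception model and a dynamics model separately and composes them at test time for planning. The following is a minimal sketch of that composition with a simple random-shooting planner; the encoder, latent dynamics, reward proxy, and all shapes are illustrative assumptions rather than the authors' architecture.

```python
import torch
import torch.nn as nn

# Sketch of hierarchically integrated models: a perception model maps the
# observation to a latent state, a (robot-specific) dynamics model rolls the
# latent forward under candidate actions, and a random-shooting planner picks
# the best action sequence. Everything here is an illustrative placeholder.

obs_dim, latent_dim, action_dim, horizon = 64, 16, 2, 5

perception = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                           nn.Linear(64, latent_dim))           # shared across platforms
dynamics = nn.Sequential(nn.Linear(latent_dim + action_dim, 64), nn.ReLU(),
                         nn.Linear(64, latent_dim))             # deployment-robot dynamics
score = nn.Linear(latent_dim, 1)                                # stand-in task objective

def plan(obs, n_candidates=256):
    """Random-shooting MPC over the integrated perception + dynamics model."""
    with torch.no_grad():
        z = perception(obs).repeat(n_candidates, 1)             # (N, latent_dim)
        actions = torch.rand(n_candidates, horizon, action_dim) * 2 - 1
        total = torch.zeros(n_candidates)
        for t in range(horizon):
            z = dynamics(torch.cat([z, actions[:, t]], dim=-1))
            total += score(z).squeeze(-1)
        best = total.argmax()
        return actions[best, 0]                                 # first action of best plan

obs = torch.randn(1, obs_dim)   # synthetic observation
print(plan(obs))
```

Because only the dynamics model is specific to the deployment robot, the perception model can be trained on data from many platforms while planning still respects the deployment robot's capabilities.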
Fri 5:00 p.m. - 5:30 p.m. | Break
Fri 6:15 p.m. - 7:15 p.m. | Poster Session 2
Fri 7:15 p.m. - 7:30 p.m. | Closing
Author Information
Masha Itkina (Stanford University)
Alex Bewley (Google)
Roberto Calandra (Facebook AI Research)
Igor Gilitschenski (MIT)
Julien PEREZ (NAVER LABS Europe)
Ransalu Senanayake (Stanford University)
Markus Wulfmeier (DeepMind)
Vincent Vanhoucke (Google)