Workshop
Tue Dec 14 07:00 AM -- 07:00 PM (PST)
4th Robot Learning Workshop: Self-Supervised and Lifelong Learning
Alex Bewley · Masha Itkina · Hamidreza Kasaei · Jens Kober · Nathan Lambert · Julien Perez · Ransalu Senanayake · Vincent Vanhoucke · Markus Wulfmeier · Igor Gilitschenski

Applying machine learning to real-world systems such as robots has been an important part of the NeurIPS community in recent years. Progress in machine learning has enabled robots to demonstrate strong performance in helping humans with household and care-taking tasks, as well as in manufacturing, logistics, transportation, and many other unstructured, human-centric environments. While these results are promising, access to high-quality, task-relevant data remains one of the largest bottlenecks for successful deployment of such technologies in the real world.

Methods to generate, re-use, and integrate more sources of valuable data, such as lifelong learning, transfer, and continuous improvement, could unlock the next steps in performance. However, accessing these data sources comes with fundamental challenges, including safety, stability, and the daunting issue of providing supervision for learning while the robot is in operation. Today, unique new opportunities are presenting themselves in this quest for robust, continuous learning: large-scale, self-supervised and multimodal approaches to learning are matching and often exceeding the performance of state-of-the-art supervised learning approaches; reinforcement and imitation learning are becoming more stable and data-efficient in real-world settings; and new approaches combining strong, principled safety and stability guarantees with the expressive power of machine learning are emerging.

This workshop aims to discuss how these emerging trends in self-supervised and lifelong learning can best be utilized in real-world robotic systems. We bring together experts with diverse perspectives on this topic to highlight how current successes in the field are changing the conversation around lifelong learning, and how this will affect the future of robotics, machine learning, and our ability to deploy intelligent, self-improving agents that enhance people's lives.

More information can be found on the website: http://www.robot-learning.ml/2021/.

Opening Remarks (Introduction)
Continual Learning of Semantic Segmentation using Complementary 2D-3D Data Representations (Contributed Talk 1: Best Paper Runner-Up)
Learning from and Interacting with Humans (Q&A 1) (Panel)
Coffee Break (Break)
Poster Session 1 (Poster Session)
Domains and Applications (Q&A 2) (Panel)
Long Break (Break)
Self- and Unsupervised Learning (Debate) (Panel)
Break
Lifelong Robotic Reinforcement Learning by Retaining Experiences (Contributed Talk 2: Best Paper)
Poster Session 2 (Poster Session)
End2End or Modular Systems (Q&A 3) (Panel)
Concluding Remarks (Wrap Up)
Lifelong Robotic Reinforcement Learning by Retaining Experiences (Poster)
Learning Design and Construction with Varying-Sized Materials via Prioritized Memory Resets (Poster)
IL-flOw: Imitation Learning from Observation using Normalizing Flows (Poster)
Continual Learning of Semantic Segmentation using Complementary 2D-3D Data Representations (Poster)
Is Curiosity All You Need? On the Utility of Emergent Behaviours from Curious Exploration (Poster)
Versatile Inverse Reinforcement Learning via Cumulative Rewards (Poster)
Simultaneous Human Action and Motion Prediction (Poster)
Demonstration-Guided Q-Learning (Poster)
panda-gym: Open-source goal-conditioned environments for robotic learning (Poster)
Sample-Efficient Policy Search with a Trajectory Autoencoder (Poster)
Open-Access Physical Robotics Environment for Real-World Reinforcement Learning Benchmark and Research (Poster)
Hybrid Imitative Planning with Geometric and Predictive Costs in Offroad Environments (Poster)
Bridge Data: Boosting Generalization of Robotic Skills with Cross-Domain Datasets (Poster)
Object Representations Guided By Optical Flow (Poster)
What Would the Expert do()?: Causal Imitation Learning (Poster)
Maximum Likelihood Constraint Inference on Continuous State Spaces (Poster)
Assistive Tele-op: Leveraging Transformers to Collect Robotic Task Demonstrations (Poster)
Using Dense Object Descriptors for Picking Cluttered General Objects with Reinforcement Learning (Poster)
Variational Inference MPC for Robot Motion with Normalizing Flows (Poster)
Task-Independent Causal State Abstraction (Poster)
Guiding Evolutionary Strategies by Differentiable Robot Simulators (Poster)
Visual Affordance-guided Policy Optimization (Poster)
ADHERENT: Learning Human-like Trajectory Generators for Whole-body Control of Humanoid Robots (Poster)
Solving Occlusion in Terrain Mapping using Neural Networks (Poster)