Safe and Robust Control of Uncertain Systems
Ashwin Balakrishna · Brijen Thananjeyan · Daniel Brown · Marek Petrik · Melanie Zeilinger · Sylvia Herbert

Mon Dec 13 08:00 AM -- 04:00 PM (PST) @ None
Event URL: https://sites.google.com/view/safe-robust-control/home

Control and decision systems are becoming a ubiquitous part of our daily lives, ranging from serving advertisements or recommendations on the internet to controlling autonomous physical systems such as industrial equipment or robots. While these systems have shown the potential to significantly improve quality of life and industrial efficiency, the decisions they make can also cause significant damage. For example, an online retailer recommending dangerous products to children, a social media platform serving content that polarizes society, or a household robot or autonomous car that collides with surrounding humans can all cause significant direct harm to society. These undesirable behaviors are not only dangerous but also lead to significant inefficiencies when deploying learning-based agents in the real world. This motivates developing algorithms for learning-based control that can reason about uncertainty and constraints in the environment to explicitly avoid undesirable behaviors. We believe hosting a discussion on safety in learning-based control at NeurIPS 2021 would have far-reaching societal impact by connecting researchers from a variety of disciplines, including machine learning, control theory, AI safety, operations research, robotics, and formal methods.

Author Information

Ashwin Balakrishna (UC Berkeley)

I am a second-year PhD student in Robotics and Artificial Intelligence at UC Berkeley, advised by Professor Ken Goldberg of the UC Berkeley AUTOLAB. My research interests are in developing algorithms for imitation and reinforcement learning that are reliable and robust enough to safely deploy on robotic systems. I am currently interested in hybrid algorithms between imitation and reinforcement learning that leverage demonstrations to either guide exploration in RL or perform reward inference. I received my Bachelor's Degree in Electrical Engineering at Caltech in 2018, and I enjoy watching/playing tennis, hiking, and eating interesting foods.

Brijen Thananjeyan (UC Berkeley)
Daniel Brown (UC Berkeley)
Marek Petrik (University of New Hampshire)
Melanie Zeilinger (ETH Zurich)
Sylvia Herbert (University of California, San Diego (UCSD))
