
Poster
Neural Hybrid Automata: Learning Dynamics With Multiple Modes and Stochastic Transitions
Michael Poli · Stefano Massaroli · Luca Scimeca · Sanghyuk Chun · Seong Joon Oh · Atsushi Yamashita · Hajime Asama · Jinkyoo Park · Animesh Garg

Tue Dec 07 08:30 AM -- 10:00 AM (PST)

Effective control and prediction of dynamical systems require appropriate handling of continuous-time and discrete, event-triggered processes. Stochastic hybrid systems (SHSs), common across engineering domains, provide a formalism for dynamical systems subject to discrete, possibly stochastic, state jumps and multi-modal continuous-time flows. Despite the versatility and importance of SHSs across applications, a general procedure for the explicit learning of both discrete events and multi-mode continuous dynamics remains an open problem. This work introduces Neural Hybrid Automata (NHAs), a recipe for learning SHS dynamics without a priori knowledge of the number of modes, mode parameters, or inter-modal transition dynamics. NHAs provide a systematic inference method based on normalizing flows, neural differential equations, and self-supervision. We showcase NHAs on several tasks, including mode recovery and flow learning in systems with stochastic transitions, and end-to-end learning of hierarchical robot controllers.
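To make the SHS setting concrete, the sketch below simulates a toy system of the kind the abstract describes: each discrete mode has its own continuous-time flow, and the active mode switches at random times according to per-pair jump intensities. This is an illustrative forward simulation only, not the NHA learning method; the two flows and the rate matrix are hypothetical examples chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical modes, each with its own continuous-time flow.
flows = {
    0: lambda x: -1.0 * x,                  # mode 0: decay toward the origin
    1: lambda x: np.array([x[1], -x[0]]),   # mode 1: rotation about the origin
}

# rates[i, j]: intensity of a stochastic jump from mode i to mode j.
rates = np.array([[0.0, 0.5],
                  [0.5, 0.0]])

def simulate(x0, mode0, T=5.0, dt=0.01):
    """Euler-integrate the active mode's flow; switch modes stochastically.

    In each step of length dt, a jump from the current mode to mode j
    occurs with probability approximately rates[mode, j] * dt.
    """
    x, mode, t = np.asarray(x0, dtype=float), mode0, 0.0
    traj = [(t, mode, x.copy())]
    while t < T:
        x = x + dt * flows[mode](x)          # continuous flow of current mode
        for j in range(len(flows)):          # discrete, stochastic transition
            if j != mode and rng.random() < rates[mode, j] * dt:
                mode = j
                break
        t += dt
        traj.append((t, mode, x.copy()))
    return traj

traj = simulate([1.0, 0.0], mode0=0)
```

Learning such a system from trajectory data alone is exactly what makes the problem hard: the mode label is latent, so the number of modes, each mode's flow, and the transition law must all be inferred jointly.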

Author Information

Michael Poli (Stanford University)

My work spans topics in deep learning, dynamical systems, variational inference, and numerical methods. I am broadly interested in ensuring that the successes achieved by deep learning methods in computer vision and natural language processing are extended to other engineering domains.

Stefano Massaroli (The University of Tokyo)
Luca Scimeca (Harvard University)
Sanghyuk Chun (NAVER AI Lab)

I'm a research scientist and tech leader at NAVER AI Lab, working on machine learning and its applications. In particular, my research interests focus on bridging the gap between two gigantic topics: reliable machine learning (e.g., robustness [C3, C9, C10, W1, W3], de-biasing or domain generalization [C6, A6], uncertainty estimation [C11, A3], explainability [C5, C11, A2, A4, W2], and fair evaluation [C5, C11]) and learning with limited annotations (e.g., multi-modal learning [C11], weakly-supervised learning [C2, C3, C4, C5, C7, C8, C12, W2, W4, W5, W6, A2, A4], and self-supervised learning). I have also contributed large-scale machine learning algorithms [C3, C9, C10, C13] at NAVER AI Lab.

Prior to working at NAVER, I worked as a research engineer on the advanced recommendation team (ART) at Kakao from 2016 to 2018. I received a master's degree in Electrical Engineering from Korea Advanced Institute of Science and Technology (KAIST) in 2016. During my master's degree, I researched a scalable algorithm for robust subspace clustering (based on robust PCA and k-means clustering). Before my master's study, I worked at IUM-SOCIUS in 2012 as a software engineering intern. I also did research internships at the Networked and Distributed Computing System Lab at KAIST and at NAVER Labs during summer 2013 and fall 2015, respectively.

Seong Joon Oh (NAVER AI Lab)
Atsushi Yamashita (The University of Tokyo)
Hajime Asama (The University of Tokyo)
Jinkyoo Park (KAIST)
Animesh Garg (University of Toronto, Vector Institute)

I am an Assistant Professor of Computer Science at the University of Toronto and a Faculty Member at the Vector Institute. I work on machine learning for perception and control in robotics.
