Supervised learning in function spaces is an emerging area of machine learning research with applications to the prediction of complex physical systems such as fluid flows, solid mechanics, and climate modeling. By directly learning maps (operators) between infinite dimensional function spaces, these models are able to learn discretization invariant representations of target functions. A common approach is to represent such target functions as linear combinations of basis elements learned from data. However, there are simple scenarios where, even though the target functions form a low dimensional submanifold, a very large number of basis elements is needed for an accurate linear representation. Here we present NOMAD, a novel operator learning framework with a nonlinear decoder map capable of learning finite dimensional representations of nonlinear submanifolds in function spaces. We show this method is able to accurately learn low dimensional representations of solution manifolds to partial differential equations while outperforming linear models of larger size. Additionally, we compare to state-of-the-art operator learning methods on a complex fluid dynamics benchmark and achieve competitive performance with a significantly smaller model size and training cost.
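To make the contrast drawn in the abstract concrete, below is a minimal sketch (illustrative only, not the authors' released implementation): a linear decoder expands a latent code in a learned basis of functions, while a NOMAD-style nonlinear decoder maps the latent code and a query coordinate jointly through a neural network. All module names, layer widths, and dimensions here are assumptions made for illustration.

```python
# Minimal sketch contrasting a linear decoder (output linear in the latent code)
# with a NOMAD-style nonlinear decoder. Names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

def mlp(sizes):
    layers = []
    for a, b in zip(sizes[:-1], sizes[1:]):
        layers += [nn.Linear(a, b), nn.GELU()]
    return nn.Sequential(*layers[:-1])  # drop the final activation

class LinearDecoder(nn.Module):
    """u(y) ~= sum_i beta_i * tau_i(y): a linear combination of learned basis functions."""
    def __init__(self, latent_dim, coord_dim):
        super().__init__()
        self.basis = mlp([coord_dim, 64, 64, latent_dim])  # learned basis tau(y)
    def forward(self, beta, y):  # beta: (batch, latent_dim), y: (batch, coord_dim)
        return (beta * self.basis(y)).sum(-1, keepdim=True)

class NonlinearDecoder(nn.Module):
    """u(y) ~= D(beta, y): the decoder is a nonlinear map of the latent code."""
    def __init__(self, latent_dim, coord_dim):
        super().__init__()
        self.net = mlp([latent_dim + coord_dim, 64, 64, 1])
    def forward(self, beta, y):
        return self.net(torch.cat([beta, y], dim=-1))

# Usage: an encoder (not shown) maps the input function to the latent code beta;
# the decoder can then be queried at arbitrary coordinates y.
beta = torch.randn(8, 16)  # hypothetical latent codes, latent_dim = 16
y = torch.rand(8, 1)       # query coordinates, coord_dim = 1
print(NonlinearDecoder(16, 1)(beta, y).shape)  # torch.Size([8, 1])
```

Because the nonlinear decoder is not restricted to a fixed linear span, it can, in principle, represent low-dimensional nonlinear solution manifolds with far fewer latent dimensions than a linear expansion would require.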
Author Information
Jacob Seidman (University of Pennsylvania)
Georgios Kissas (University of Pennsylvania)
Paris Perdikaris (University of Pennsylvania)
George J. Pappas (University of Pennsylvania)
George J. Pappas is the UPS Foundation Professor and Chair of the Department of Electrical and Systems Engineering at the University of Pennsylvania. He also holds secondary appointments in the Departments of Computer and Information Science, and Mechanical Engineering and Applied Mechanics. He is a member of the GRASP Lab and the PRECISE Center. He has previously served as the Deputy Dean for Research in the School of Engineering and Applied Science. His research focuses on control theory and, in particular, hybrid systems, embedded systems, and hierarchical and distributed control systems, with applications to unmanned aerial vehicles, distributed robotics, green buildings, and biomolecular networks. He is a Fellow of the IEEE and has received awards including the Antonio Ruberti Young Researcher Prize, the George S. Axelby Award, the O. Hugo Schuck Best Paper Award, the National Science Foundation PECASE, and the George H. Heilmeier Faculty Excellence Award.
More from the Same Authors
- 2022: On the impact of larger batch size in the training of Physics Informed Neural Networks
  Shyam Sankaran · Hanwen Wang · Leonardo Ferreira Guilhoto · Paris Perdikaris
- 2022 Spotlight: Learning Operators with Coupled Attention
  Georgios Kissas · Jacob Seidman · Leonardo Ferreira Guilhoto · Victor M. Preciado · George J. Pappas · Paris Perdikaris
- 2022 Poster: Learning Operators with Coupled Attention
  Georgios Kissas · Jacob Seidman · Leonardo Ferreira Guilhoto · Victor M. Preciado · George J. Pappas · Paris Perdikaris
- 2022 Poster: Probable Domain Generalization via Quantile Risk Minimization
  Cian Eastwood · Alexander Robey · Shashank Singh · Julius von Kügelgen · Hamed Hassani · George J. Pappas · Bernhard Schölkopf
- 2022 Poster: Collaborative Linear Bandits with Adversarial Agents: Near-Optimal Regret Bounds
  Aritra Mitra · Arman Adibi · George J. Pappas · Hamed Hassani
- 2021: Enhancing the trainability and expressivity of deep MLPs with globally orthogonal initialization
  Hanwen Wang · Paris Perdikaris
- 2021 Poster: Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients
  Aritra Mitra · Rayana Jaafar · George J. Pappas · Hamed Hassani
- 2021 Poster: Model-Based Domain Generalization
  Alexander Robey · George J. Pappas · Hamed Hassani
- 2021 Poster: Adversarial Robustness with Semi-Infinite Constrained Learning
  Alexander Robey · Luiz Chamon · George J. Pappas · Hamed Hassani · Alejandro Ribeiro
- 2021 Poster: Safe Pontryagin Differentiable Programming
  Wanxin Jin · Shaoshuai Mou · George J. Pappas
- 2019 Poster: Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks
  Mahyar Fazlyab · Alexander Robey · Hamed Hassani · Manfred Morari · George J. Pappas
- 2019 Spotlight: Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks
  Mahyar Fazlyab · Alexander Robey · Hamed Hassani · Manfred Morari · George J. Pappas
- 2018: Poster Session 1
  Stefan Gadatsch · Danil Kuzin · Navneet Kumar · Patrick Dallaire · Tom Ryder · Remus-Petru Pop · Nathan Hunt · Adam Kortylewski · Sophie Burkhardt · Mahmoud Elnaggar · Dieterich Lawson · Yifeng Li · Jongha (Jon) Ryu · Juhan Bae · Micha Livne · Tim Pearce · Mariia Vladimirova · Jason Ramapuram · Jiaming Zeng · Xinyu Hu · Jiawei He · Danielle Maddix · Arunesh Mittal · Albert Shaw · Tuan Anh Le · Alexander Sagel · Lisha Chen · Victor Gallego · Mahdi Karami · Zihao Zhang · Tal Kachman · Noah Weber · Matt Benatan · Kumar K Sricharan · Vincent Cartillier · Ivan Ovinnikov · Buu Phan · Mahmoud Hossam · Liu Ziyin · Valerii Kharitonov · Eugene Golikov · Qiang Zhang · Jae Myung Kim · Sebastian Farquhar · Jishnu Mukhoti · Xu Hu · Gregory Gundersen · Lavanya Sita Tekumalla · Paris Perdikaris · Ershad Banijamali · Siddhartha Jain · Ge Liu · Martin Gottwald · Katy Blumer · Sukmin Yun · Ranganath Krishnan · Roman Novak · Yilun Du · Yu Gong · Beliz Gokkaya · Jessica Ai · Daniel Duckworth · Johannes von Oswald · Christian Henning · Louis-Philippe Morency · Ali Ghodsi · Mahesh Subedar · Jean-Pascal Pfister · Rémi Lebret · Chao Ma · Aleksander Wieczorek · Laurence Perreault Levasseur