

Spotlight in Workshop: The Symbiosis of Deep Learning and Differential Equations -- III

Can Physics-Informed Neural Operators Self-Improve?

Ritam Majumdar · Amey Varhade · Shirish Karande · Lovekesh Vig

Keywords: [ differential equations ] [ neural operators ] [ self-training ] [ semi-supervised learning ]

[ Project Page ]
Sat 16 Dec 9:45 a.m. PST — 10 a.m. PST

Abstract: Self-training techniques have shown remarkable value across many deep learning models and tasks. However, they remain largely unexplored in the context of learning fast solvers for systems of partial differential equations, e.g., Neural Operators. In this work, we explore the use of self-training for Fourier Neural Operators (FNOs). Neural Operators emerged as a data-driven technique, but data from experiments or traditional solvers is not always readily available. Physics-Informed Neural Operators (PINO) overcome this constraint by training with a physics loss; however, the accuracy of PINO trained without data does not match the performance obtained by training with data. In this work, we show that self-training can be used to close this gap. We examine two canonical examples, the 1D-Burgers and 2D-Darcy PDEs, to showcase the efficacy of self-training. Specifically, FNOs trained exclusively with physics loss through self-training achieve test errors within $1.07\times$ (Burgers) and $1.02\times$ (Darcy) of those of FNOs trained with both data and physics loss. Furthermore, we find that pseudo-labels can be used for self-training without training to convergence in each iteration. As a consequence, we are able to discover self-training schedules that improve upon the baseline PINO in terms of both accuracy and training time.
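The abstract describes a pseudo-labeling loop: a model trained only with a physics loss generates pseudo-labels for unlabeled PDE inputs, which then serve as surrogate data for further training, with each round deliberately stopped short of convergence. Below is a minimal PyTorch sketch of such a loop; `model`, `sample_inputs`, `physics_residual`, and all hyperparameters are illustrative assumptions, not the authors' actual implementation.

```python
# A minimal sketch of physics-loss self-training, assuming:
#   model            -- a neural operator mapping input functions to solutions
#   sample_inputs()  -- returns a batch of unlabeled PDE input functions
#   physics_residual -- returns the PDE residual of a predicted solution
# These names are hypothetical stand-ins, not the paper's code.
import torch
import torch.nn.functional as F

def self_train(model, sample_inputs, physics_residual,
               n_rounds=10, inner_steps=500, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    a = sample_inputs()                 # batch of PDE input functions
    pseudo = model(a).detach()          # initial pseudo-labels from the model
    for _ in range(n_rounds):
        # Inner loop: per the abstract, it need not run to convergence;
        # a fixed, modest step budget per round already suffices.
        for _ in range(inner_steps):
            u = model(a)
            loss = (physics_residual(a, u).pow(2).mean()   # physics loss
                    + F.mse_loss(u, pseudo))               # pseudo-label loss
            opt.zero_grad()
            loss.backward()
            opt.step()
        pseudo = model(a).detach()      # refresh pseudo-labels for next round
    return model
```

The choice of `n_rounds` and `inner_steps` corresponds to the self-training "schedule" the abstract mentions; trading fewer inner steps for more pseudo-label refreshes is, per the abstract, what yields gains in both accuracy and wall-clock time.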
