

Poster in Workshop: AI for Science: from Theory to Practice

Auto-PINN: Understanding and Optimizing Physics-Informed Neural Architecture

Yicheng Wang · Xiaotian Han · Chia-Yuan Chang · Daochen Zha · Ulisses M. Braga-Neto · Xia Hu


Abstract:

Physics-Informed Neural Networks (PINNs) are revolutionizing science and engineering practices by harnessing the power of deep learning for scientific computation. The neural architecture's hyperparameters significantly impact the efficiency and accuracy of the PINN solver. However, optimizing these hyperparameters remains an open and challenging problem because of the large search space and the difficulty in identifying a suitable search objective for PDEs. In this paper, we propose Auto-PINN, the first systematic, automated hyperparameter optimization approach for PINNs, which employs Neural Architecture Search (NAS) techniques for PINN design. Auto-PINN avoids manually or exhaustively searching the hyperparameter space associated with PINNs. A comprehensive set of pre-experiments, using standard PDE benchmarks, enables us to probe the structure-performance relationship in PINNs. We discover that the different hyperparameters can be decoupled and that the training loss function of PINNs serves as an effective search objective. Comparison experiments demonstrate that Auto-PINN produces neural architectures with superior stability and accuracy over baseline methods.
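The abstract does not give implementation details, but the PINN training loss it proposes as a search objective can be illustrated with a minimal sketch. The snippet below (an assumption for illustration, not the authors' code) evaluates a PINN-style loss for the 1D Poisson benchmark u''(x) = -&pi;&sup2; sin(&pi;x) with u(0) = u(1) = 0: a residual term at collocation points plus a boundary term. A real PINN would use automatic differentiation; here a finite-difference approximation of u'' keeps the example dependency-light. The hypothetical `width` variable stands for one of the architecture hyperparameters Auto-PINN would search over.

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    """Tiny one-hidden-layer tanh network u_theta(x); a stand-in architecture."""
    return np.tanh(x[:, None] @ W1 + b1) @ W2 + b2

def pinn_loss(params, n_coll=64, eps=1e-3):
    """PINN-style training loss for the 1D Poisson problem
    u''(x) = -pi^2 sin(pi x), u(0) = u(1) = 0 (a standard benchmark)."""
    W1, b1, W2, b2 = params
    x = np.linspace(0.0, 1.0, n_coll)
    u = lambda z: mlp(z, W1, b1, W2, b2).ravel()
    # Second derivative via central finite differences
    # (real PINNs use automatic differentiation instead).
    u_xx = (u(x + eps) - 2.0 * u(x) + u(x - eps)) / eps**2
    f = -np.pi**2 * np.sin(np.pi * x)
    residual_loss = np.mean((u_xx - f) ** 2)          # PDE residual term
    boundary_loss = (u(np.array([0.0]))[0] ** 2        # boundary condition term
                     + u(np.array([1.0]))[0] ** 2)
    return residual_loss + boundary_loss

rng = np.random.default_rng(0)
width = 16  # one architecture hyperparameter a NAS procedure would vary
params = (rng.normal(size=(1, width)), np.zeros(width),
          rng.normal(size=(width, 1)), np.zeros(1))
loss = pinn_loss(params)
print(loss)
```

In a search loop, this scalar loss would be re-evaluated after training each candidate architecture (e.g. varying `width` or depth), and the architecture with the lowest training loss would be selected, reflecting the paper's observation that the PINN training loss is an effective search objective.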
