

Poster
in
Workshop: Machine Learning and the Physical Sciences

Adversarial Noise Injection for Learned Turbulence Simulations

Jingtong Su · Julia Kempe · Drummond Fielding · Nikolaos Tsilivis · Miles Cranmer · Shirley Ho


Abstract:

Machine learning is a powerful way to learn effective dynamics of physical simulations and has seen great interest from the community in recent years. Recent work has shown that deep neural networks trained end-to-end can learn to predict turbulent dynamics on coarse grids more accurately than classical solvers. These works point out that adding Gaussian noise to the input during training is indispensable for improving the stability and roll-out performance of learned simulators, serving as an alternative to training through multiple steps. In this work we bring insights from robust machine learning and propose injecting adversarial noise to push machine learning systems a step further towards improved generalization in ML-assisted physical simulations. We argue that training models on these worst-case perturbations, instead of model-agnostic Gaussian noise, may lead to better rollouts, and we hope that adversarial noise injection becomes a standard tool for ML-based simulations. We show experimentally in the 2D setting that, for certain classes of turbulence, adversarial noise can help stabilize model rollouts, maintain a lower loss, and preserve physical properties such as energy. In addition, we identify a potentially more challenging task, driven 2D turbulence, and show that while none of the noise-based approaches significantly improves rollouts, adversarial noise still helps.
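The abstract contrasts model-agnostic Gaussian input noise with worst-case (adversarial) perturbations. A minimal sketch of the idea, under the simplifying assumption that the learned simulator is a linear map `x -> W @ x` trained with squared error: the gradient of the loss with respect to the input has a closed form, and the sign of that gradient gives an FGSM-style worst-case direction inside an eps-bounded ball. The names `W`, `x`, `y`, and `adversarial_perturb` are illustrative, not from the paper.

```python
import numpy as np

def adversarial_perturb(W, x, y, eps=1e-2):
    """FGSM-style worst-case input perturbation for a linear surrogate.

    For the loss 0.5 * ||W x - y||^2, the gradient w.r.t. the input x is
    W.T @ (W x - y). Perturbing x by eps times the sign of this gradient
    moves it in the worst-case direction within an L-infinity eps-ball,
    in contrast to sampling model-agnostic Gaussian noise.
    """
    grad_x = W.T @ (W @ x - y)
    return x + eps * np.sign(grad_x)

# Toy usage: a random linear "simulator" and a random input/target pair.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
x = rng.standard_normal(4)
y = rng.standard_normal(4)

x_adv = adversarial_perturb(W, x, y, eps=0.05)
loss = lambda z: 0.5 * np.sum((W @ z - y) ** 2)
# For a convex (linear-model) loss, the adversarial input cannot
# decrease the loss relative to the clean input.
assert loss(x_adv) >= loss(x)
```

In the paper's actual setting the simulator is a deep network, so the input gradient would come from backpropagation (e.g. autograd) rather than a closed form, but the training-time recipe is the same: perturb the input towards higher loss, then train on the perturbed input.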
