Extreme-ultraviolet images taken by the Atmospheric Imaging Assembly make it possible to apply deep vision techniques to the prediction of solar wind speed - a difficult, high-impact, and unsolved problem. This study uses vision transformers, together with a set of methodological and modelling improvements, to deliver an 11.1% lower RMSE and a 17.4% higher prediction correlation than the previous state-of-the-art models. Our analysis further shows that vision transformers combined with our pipeline consistently outperform convolutional alternatives: the best vision transformer outperforms the best convolutional model by 1.8% in RMSE and 2.6% in correlation with the ground-truth solar wind speed.
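The abstract does not specify the authors' architecture in detail; the following is a minimal, hypothetical sketch of the kind of pipeline it describes - a vision transformer that maps a single-channel EUV image to a scalar solar wind speed. All dimensions, layer counts, and the `ViTRegressor` name are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ViTRegressor(nn.Module):
    """Hypothetical ViT regression sketch: EUV image -> scalar wind speed (km/s).

    Hyperparameters below are illustrative, not taken from the paper.
    """

    def __init__(self, img_size=64, patch_size=8, in_ch=1, dim=64, depth=2, heads=4):
        super().__init__()
        n_patches = (img_size // patch_size) ** 2
        # Patch embedding via a strided convolution, as in standard ViTs.
        self.patch_embed = nn.Conv2d(in_ch, dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=dim * 2, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # Regression head on the class token: one scalar per image.
        self.head = nn.Linear(dim, 1)

    def forward(self, x):
        x = self.patch_embed(x)              # (B, dim, H', W')
        x = x.flatten(2).transpose(1, 2)     # (B, n_patches, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)
        return self.head(x[:, 0]).squeeze(-1)  # (B,) predicted speeds

model = ViTRegressor()
imgs = torch.randn(2, 1, 64, 64)  # stand-in batch of AIA-like EUV images
speeds = model(imgs)              # shape (2,)
```

Training such a model against in-situ wind speed measurements (e.g. with an MSE loss) would yield the RMSE and correlation metrics the abstract reports; the convolutional baselines would swap the encoder for a CNN backbone while keeping the same regression head.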
Filip Svoboda (University of Cambridge)
Edward Brown (University of Cambridge)