

Poster in the NeurIPS 2023 Workshop: Machine Learning and the Physical Sciences

Lensformer: A Physics-Informed Vision Transformer for Gravitational Lensing

Lucas José Velôso de Souza · Michael Toomey · Sergei Gleyzer


Abstract:

We introduce Lensformer, a state-of-the-art transformer architecture that incorporates the lens equation directly into its design for the purpose of studying dark matter in the context of strong gravitational lensing. The architecture combines the strengths of Transformer models from natural language processing with the analytical rigor of Physics-Informed Neural Networks (PINNs). By embedding the lens equation in the architecture, Lensformer is able to accurately approximate the gravitational potential of the lensing galaxy. The resulting physics-based features are then fed into a Vision Transformer (ViT), which provides a more nuanced representation for a variety of strong-lensing problems. In this work we consider a toy example of classifying simulations of different dark matter models. To validate the model, we benchmark Lensformer against other leading architectures and demonstrate that it exhibits superior performance.
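For reference, the (single-plane) lens equation the abstract refers to relates image-plane coordinates to source-plane coordinates through the gradient of the projected lensing potential:

$$
\vec{\beta} \;=\; \vec{\theta} - \vec{\alpha}(\vec{\theta}) \;=\; \vec{\theta} - \nabla\psi(\vec{\theta}),
$$

where $\vec{\theta}$ is the observed (image-plane) angular position, $\vec{\beta}$ the unlensed (source-plane) position, $\vec{\alpha}$ the deflection angle, and $\psi$ the lensing potential.

The abstract does not spell out how this constraint enters the network, so the following is only a minimal, hypothetical sketch: it assumes a head that predicts $\psi$ on the image grid and a made-up `LensEquationLayer` that converts it to source-plane coordinates with finite differences, which is one natural way to hard-wire the lens equation before a ViT backbone. It is not the authors' implementation.

```python
import torch
import torch.nn as nn


class LensEquationLayer(nn.Module):
    """Hypothetical illustration (not the authors' code): map a predicted
    lensing potential psi(theta) to source-plane coordinates via the
    thin-lens equation  beta = theta - grad(psi)."""

    def forward(self, psi: torch.Tensor) -> torch.Tensor:
        # psi: (batch, 1, H, W) lensing potential sampled on the image grid,
        # here assumed to span a unit field of view in each direction.
        b, _, h, w = psi.shape
        ys = torch.linspace(0.0, 1.0, h, device=psi.device)
        xs = torch.linspace(0.0, 1.0, w, device=psi.device)
        theta_y, theta_x = torch.meshgrid(ys, xs, indexing="ij")

        # Deflection angle alpha = grad(psi), via central finite differences.
        alpha_y, alpha_x = torch.gradient(psi, spacing=(ys, xs), dim=(-2, -1))

        # Lens equation: source-plane position for every image-plane pixel.
        beta_y = theta_y - alpha_y
        beta_x = theta_x - alpha_x
        return torch.cat([beta_y, beta_x], dim=1)  # (batch, 2, H, W)


if __name__ == "__main__":
    layer = LensEquationLayer()
    psi = torch.randn(4, 1, 64, 64)  # stand-in for a predicted potential
    beta = layer(psi)
    print(beta.shape)  # torch.Size([4, 2, 64, 64])
```

Deriving the deflection field from a single scalar potential in this way keeps the predicted $\vec{\alpha}$ curl-free by construction, which is one common motivation for physics-informed designs in strong lensing.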
