

Poster in NeurIPS 2023 Workshop: Machine Learning and the Physical Sciences

Efficient and Robust Jet Tagging at the LHC with Knowledge Distillation

Ryan Liu · Abhijith Gandrakota · Jennifer Ngadiuba · Jean-Roch Vlimant · Maria Spiropulu


Abstract:

The challenging environment of real-time data processing systems at the Large Hadron Collider (LHC) strictly limits the computational complexity of algorithms that can be deployed. For deep learning models, this implies that only small models with low capacity and weak inductive bias are feasible. To address this issue, we utilize knowledge distillation to combine the performance of large models with the reduced computational complexity of small ones. In this paper, we present an implementation of knowledge distillation, demonstrating an overall boost in the student models' performance for the task of classifying jets at the LHC. Furthermore, by using a teacher model with a strong inductive bias of Lorentz symmetry, we show that we can induce the same inductive bias in the student model, which leads to better robustness against arbitrary Lorentz boosts.
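For readers unfamiliar with the technique, the sketch below shows a standard knowledge-distillation objective in the Hinton-style soft-target form; it is an illustrative example only, not the authors' implementation, and the temperature and mixing weight are hypothetical hyperparameters.

```python
# Minimal sketch of a soft-target knowledge-distillation loss.
# `temperature` and `alpha` are illustrative hyperparameters.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Blend a soft-target KL term with the usual hard-label cross-entropy."""
    # Soften both output distributions with the temperature, then match
    # the student to the teacher via KL divergence.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kd_term = F.kl_div(soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2
    # Standard cross-entropy against the true jet-class labels.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term
```

In practice the teacher's logits are precomputed or obtained in evaluation mode, and only the small student model, suitable for the LHC real-time constraints described above, is updated with this loss.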
