We describe an approach to learning optimal control policies for a large, linear particle accelerator using deep reinforcement learning coupled with a high-fidelity physics engine. The framework consists of an AI controller that uses deep neural nets for state- and action-space representation and learns optimal policies from reward signals provided by the physics simulator. In this work, we focus on controlling only a small section of the entire accelerator. Nevertheless, initial results indicate that we can achieve better-than-human performance in terms of particle beam current and distribution. The ultimate goal of this line of work is to reduce the tuning time for such facilities by orders of magnitude and achieve near-autonomous control.
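The reward-driven control loop described above can be sketched in miniature. The environment below is a toy stand-in for the physics simulator: the state is a single discretized magnet setting, and the reward is a proxy for beam quality. All names, dynamics, and the tabular Q-value iteration are illustrative assumptions, not details from the paper (which uses deep networks for state and action representation):

```python
# Toy stand-in for the physics simulator (illustrative only, not the paper's setup).
N_SETTINGS = 11   # discretized magnet strengths 0..10
OPTIMUM = 7       # setting that maximizes simulated beam current (hidden from the agent)
GAMMA = 0.9       # discount factor

def simulate(state, action):
    """Apply an action (-1 or +1 step in magnet strength); return (next_state, reward)."""
    nxt = max(0, min(N_SETTINGS - 1, state + action))
    return nxt, -abs(nxt - OPTIMUM)   # reward: negative distance from the optimum

# Tabular Q-value iteration stands in for deep-RL training, to keep the sketch
# short and deterministic; the structure (simulator call -> reward -> update) is
# the same loop a deep-RL agent would run.
Q = {(s, a): 0.0 for s in range(N_SETTINGS) for a in (-1, 1)}
for _ in range(100):
    for s in range(N_SETTINGS):
        for a in (-1, 1):
            nxt, r = simulate(s, a)
            Q[(s, a)] = r + GAMMA * max(Q[(nxt, b)] for b in (-1, 1))

# Greedy rollout: the learned policy walks the magnet setting toward the optimum.
s = 0
for _ in range(7):
    a = max((-1, 1), key=lambda act: Q[(s, act)])
    s, _ = simulate(s, a)
print(s)  # -> 7
```

The same loop applies when the tabular Q-values are replaced by a neural network and the toy dynamics by a high-fidelity accelerator simulation; only the function approximator and environment change.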
Xiaoying Pang (Apple)
Sunil Thulasidasan (Los Alamos National Laboratory & University of Washington)
Larry Rybarcyk (Los Alamos National Laboratory)