Poster
in
Workshop: AI for Science: Progress and Promises

Towards Neural Variational Monte Carlo That Scales Linearly with System Size

Or Sharir · Garnet Chan · Anima Anandkumar

Keywords: [ variational monte carlo ] [ quantum ] [ many-body ]


Abstract: Quantum many-body problems are among the most challenging in science and are central to demystifying exotic quantum phenomena such as high-temperature superconductivity. The combination of neural networks (NNs) for representing quantum states with the Variational Monte Carlo (VMC) algorithm has been shown to be a promising method for solving such problems. However, the run-time of this approach scales quadratically with the number of simulated particles, constraining the practically usable NNs to, in machine-learning terms, minuscule sizes (<10M parameters). Considering the many breakthroughs brought by extremely large NNs at the 1B+ parameter scale in other domains, lifting this constraint could significantly expand the set of quantum systems we can accurately simulate on classical computers, both in size and in complexity. We propose an NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that uses vector-quantization techniques to exploit redundancies in the local-energy calculations of the VMC algorithm, which are the source of the quadratic scaling. In preliminary experiments, we demonstrate VQ-NQS's ability to reproduce the ground state of the 2D Heisenberg model across various system sizes, while reporting a significant reduction of up to ${\times}5$ in the number of FLOPs of the local-energy calculation.
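The quadratic scaling mentioned in the abstract can be illustrated with a minimal sketch (this is not the authors' VQ-NQS code; the toy log-amplitude function `log_psi` and the Hamiltonian bookkeeping are illustrative assumptions). In VMC, the local energy of a sampled configuration $\sigma$ sums amplitude ratios over all configurations connected to $\sigma$ by the Hamiltonian. For the Heisenberg model, each antiparallel nearest-neighbor bond contributes one extra amplitude evaluation; with $O(N)$ bonds and an $O(N)$ forward pass per evaluation, the cost per sample is $O(N^2)$:

```python
import numpy as np

def log_psi(sigma, W):
    # Toy stand-in for a neural quantum state's log-amplitude.
    # Each evaluation costs at least O(N); a real NN forward pass is larger.
    return np.tanh(W @ sigma).sum()

def local_energy(sigma, W, bonds, J=1.0):
    """E_loc(sigma) = sum_{sigma'} H_{sigma,sigma'} psi(sigma') / psi(sigma)
    for the spin-1/2 Heisenberg model in the S^z basis (sigma[i] = +/-1).
    Returns the local energy and the number of amplitude evaluations:
    one per antiparallel bond plus one for the reference configuration."""
    lp = log_psi(sigma, W)
    n_evals = 1
    e = 0.0
    for (i, j) in bonds:
        if sigma[i] == sigma[j]:
            e += J / 4.0          # diagonal S^z_i S^z_j term, parallel pair
        else:
            e -= J / 4.0          # diagonal term, antiparallel pair
            flipped = sigma.copy()
            flipped[i], flipped[j] = sigma[j], sigma[i]   # exchange the pair
            # Off-diagonal spin-exchange term: one extra NN evaluation.
            e += (J / 2.0) * np.exp(log_psi(flipped, W) - lp)
            n_evals += 1
    return e, n_evals

rng = np.random.default_rng(0)
N = 8
W = rng.normal(size=(4, N))
sigma = np.array([1, -1] * (N // 2))          # Néel configuration on a chain
bonds = [(i, i + 1) for i in range(N - 1)]    # open 1D chain for simplicity
e, n_evals = local_energy(sigma, W, bonds)
```

With all N-1 bonds antiparallel here, `n_evals` equals N, and each evaluation is itself O(N), giving the quadratic cost per sample that VQ-NQS targets by reusing (vector-quantized) intermediate computations across these closely related configurations.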