Poster in NeurIPS 2023 Workshop: Machine Learning and the Physical Sciences

Loss Functionals for Learning Likelihood Ratios

Shahzar Rizvi · Mariel Pettee · Benjamin Nachman


Abstract:

The likelihood ratio is a crucial quantity for statistical inference that enables hypothesis testing, construction of confidence intervals, reweighting of distributions, and more. For modern data- or simulation-driven scientific research, however, computing the likelihood ratio can be difficult or even impossible. Approximations of the likelihood ratio may be computed using parametrizations of neural network-based classifiers. By evaluating four losses in approximating the likelihood ratio of univariate Gaussians and simulated high-energy particle physics datasets, we recommend particular configurations for each loss and propose a strategy to scan over generalized loss families for the best overall performance.
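The core idea behind classifier-based likelihood-ratio estimation is the standard "likelihood-ratio trick": a classifier trained with binary cross-entropy to separate samples from two distributions converges to f(x) = p1(x) / (p0(x) + p1(x)), so the ratio p1(x)/p0(x) can be recovered as f(x)/(1 - f(x)). The sketch below illustrates this on univariate Gaussians, assuming PyTorch; the specific network size, optimizer settings, and Gaussian parameters are illustrative choices, not the configurations studied in the paper.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.stats import norm

# Two hypotheses: p0 = N(0, 1) and p1 = N(0.5, 1) (illustrative choice).
n = 100_000
x0 = np.random.normal(0.0, 1.0, n)  # samples from p0, label 0
x1 = np.random.normal(0.5, 1.0, n)  # samples from p1, label 1

x = torch.tensor(np.concatenate([x0, x1]), dtype=torch.float32).unsqueeze(1)
y = torch.tensor(np.concatenate([np.zeros(n), np.ones(n)]),
                 dtype=torch.float32).unsqueeze(1)

# Small classifier f(x) trained with the binary cross-entropy loss.
model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCELoss()

for epoch in range(20):
    perm = torch.randperm(len(x))
    for i in range(0, len(x), 4096):
        idx = perm[i:i + 4096]
        opt.zero_grad()
        loss = bce(model(x[idx]), y[idx])
        loss.backward()
        opt.step()

# For BCE, the optimal classifier is f(x) = p1(x) / (p0(x) + p1(x)),
# so the likelihood ratio is r(x) = f(x) / (1 - f(x)).
x_test = torch.linspace(-3.0, 3.0, 7).unsqueeze(1)
with torch.no_grad():
    f = model(x_test)
r_learned = (f / (1.0 - f)).squeeze().numpy()

# Exact ratio for comparison (known in closed form for Gaussians).
xs = x_test.squeeze().numpy()
r_exact = norm.pdf(xs, 0.5, 1.0) / norm.pdf(xs, 0.0, 1.0)
print("learned:", np.round(r_learned, 3))
print("exact:  ", np.round(r_exact, 3))
```

Other loss functionals (e.g. mean squared error or maximum-likelihood-style losses) lead to different optimal classifier outputs and hence different transformations back to the ratio; comparing such choices is the subject of the paper.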
