

Poster

Learning to Pivot with Adversarial Networks

Gilles Louppe · Michael Kagan · Kyle Cranmer

Pacific Ballroom #105

Keywords: [ Regularization ] [ Fairness, Accountability, and Transparency ] [ Regression ] [ Classification ] [ Adversarial Networks ] [ Information Theory ]


Abstract:

Several techniques for domain adaptation have been proposed to account for differences in the distribution of the data used for training and testing. The majority of this work focuses on a binary domain label. Similar problems occur in a scientific context where there may be a continuous family of plausible data generation processes associated with the presence of systematic uncertainties. Robust inference is possible if it is based on a pivot -- a quantity whose distribution does not depend on the unknown values of the nuisance parameters that parametrize this family of data generation processes. In this work, we introduce and derive theoretical results for a training procedure based on adversarial networks for enforcing the pivotal property (or, equivalently, fairness with respect to continuous attributes) on a predictive model. The method includes a hyperparameter to control the trade-off between accuracy and robustness. We demonstrate the effectiveness of this approach with a toy example and examples from particle physics.
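The training procedure described above can be sketched as alternating updates between a classifier and an adversary that tries to recover the nuisance parameter from the classifier's output. The following is a minimal NumPy illustration, not the authors' implementation: the toy data, the linear adversary, and the names (`lam` for the trade-off hyperparameter, `w`, `b`, `a`, `c` for parameters) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative): labels y depend on x, while a continuous
# nuisance parameter z shifts the features x.
n = 2000
z = rng.normal(size=n)                     # nuisance parameter
x = rng.normal(size=(n, 2)) + z[:, None]   # features contaminated by z
y = (x[:, 0] + x[:, 1] > 0).astype(float)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

w, b = np.zeros(2), 0.0   # classifier f(x) = sigmoid(x @ w + b)
a, c = 0.0, 0.0           # adversary r(s) = a*s + c, regressing z from s
lam, lr = 1.0, 0.1        # lam controls the accuracy/robustness trade-off

for step in range(500):
    s = sigmoid(x @ w + b)

    # Adversary step: minimise the MSE of its estimate of z
    # given the classifier output s.
    resid = (a * s + c) - z
    a -= lr * np.mean(2 * resid * s)
    c -= lr * np.mean(2 * resid)

    # Classifier step: descend the gradient of BCE(s, y) - lam * MSE(adversary),
    # so the classifier is penalised whenever s carries information about z.
    resid = (a * s + c) - z
    dlogit = (s - y) - lam * 2 * resid * a * s * (1 - s)
    w -= lr * (x.T @ dlogit) / n
    b -= lr * np.mean(dlogit)

adv_mse = np.mean(((a * s + c) - z) ** 2)
print("adversary MSE:", adv_mse, "vs Var(z):", np.var(z))
```

If the classifier output approaches a pivot, the adversary's MSE approaches the unconditional variance of z, since s then carries no information about the nuisance parameter. The paper's actual method uses neural networks for both players and handles richer adversary models than this linear regressor.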
