In this paper, we focus on auditing black-box prediction models for compliance with the GDPR’s data minimization principle. This principle requires prediction models to use only the minimal information that is necessary for performing the task at hand. Given the challenge of the black-box setting, our key idea is to check whether each of the prediction model’s input features is individually necessary by assigning it some constant value (i.e., applying a simple imputation) across all prediction instances and measuring the extent to which the model outcomes would change. We introduce a metric for data minimization that is based on model instability under simple imputations. We extend the applicability of this metric from a finite-sample setting to a distributional setting by introducing a probabilistic data minimization guarantee, which we derive using a Bayesian approach. Furthermore, we address the auditing problem under a constraint on the number of queries to the prediction system. We formulate the problem of allocating a budget of system queries across feasible simple imputations (for investigating model instability) as a multi-armed bandit framework with probabilistic success metrics. We define two bandit problems for providing a probabilistic data minimization guarantee at a given confidence level: a decision problem given a target data minimization level, and a measurement problem given a fixed query budget. We design efficient algorithms for these auditing problems using novel exploration strategies that extend classical bandit strategies. Our experiments with real-world prediction systems show that our auditing algorithms significantly outperform simpler benchmarks in both the measurement and decision problems.
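The core measurement described above can be illustrated with a rough sketch (not the paper's exact metric or algorithms): given query access to a black-box prediction function, replace one feature with a constant across all instances and record the fraction of predictions that change. The `predict` callable, the feature matrix `X`, and the per-feature `candidate_values` below are hypothetical placeholders, and exhaustive evaluation is assumed purely for illustration.

```python
import numpy as np

def instability_under_imputation(predict, X, feature, value):
    """Fraction of instances whose prediction changes when `feature`
    is replaced by the constant `value` (a simple imputation)."""
    baseline = predict(X)
    X_imputed = X.copy()
    X_imputed[:, feature] = value
    return np.mean(predict(X_imputed) != baseline)

def audit(predict, X, candidate_values):
    """For each feature, report the smallest instability achieved over its
    candidate imputation values; a small value suggests the feature may not
    be individually necessary for the model's predictions."""
    return {
        j: min(instability_under_imputation(predict, X, j, v)
               for v in candidate_values[j])
        for j in range(X.shape[1])
    }
```

In the query-limited setting the abstract describes, queries would instead be allocated across candidate imputations with bandit-style exploration rather than evaluated exhaustively as in this sketch.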
Author Information
Bashir Rastegarpanah (Boston University)
Krishna Gummadi (Max Planck Institute for Software Systems)
Mark Crovella (Boston University)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: Auditing Black-Box Prediction Models for Data Minimization Compliance
More from the Same Authors
- 2023 Poster: Diffused Redundancy in Pre-trained Representations
  Vedant Nanda · Till Speicher · John Dickerson · Krishna Gummadi · Soheil Feizi · Adrian Weller
- 2019: Invited talk #1
  Krishna Gummadi
- 2018 Poster: Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making
  Hoda Heidari · Claudio Ferrari · Krishna Gummadi · Andreas Krause
- 2017 Poster: From Parity to Preference-based Notions of Fairness in Classification
  Muhammad Bilal Zafar · Isabel Valera · Manuel Rodriguez · Krishna Gummadi · Adrian Weller