This interdisciplinary workshop will consider issues of fairness, accountability, and transparency in machine learning. It will address growing anxieties about the role that machine learning plays in consequential decision-making in such areas as commerce, employment, healthcare, education, and policing.
Reflecting these concerns, President Obama at the start of 2014 called for a 90-day review of Big Data. The resulting report, "Big Data: Seizing Opportunities, Preserving Values", concluded that "big data technologies can cause societal harms beyond damages to privacy". It voiced particular concern about the possibility that decisions informed by big data could have discriminatory effects, even in the absence of discriminatory intent, and could further subject already disadvantaged groups to less favorable treatment. It also expressed alarm about the threat that an "opaque decision-making environment" and an "impenetrable set of algorithms" pose to autonomy. In its recommendations to the President, the report called for additional "technical expertise to stop discrimination", and for further research into the dangers of "encoding discrimination in automated decisions".
Our workshop takes up this call. It will focus on these issues both as challenging constraints on the practical application of machine learning and as problems that lend themselves to novel computational solutions.
Questions to the machine learning community include:
• How can we achieve high classification accuracy while eliminating discriminatory biases? What are meaningful formal fairness properties?
• How can we design expressive yet easily interpretable classifiers?
• Can we ensure that a classifier remains accurate even if the statistical signal it relies on is exposed to public scrutiny?
• Are there practical methods to test existing classifiers for compliance with a policy?
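As a concrete illustration of the first question, two formal fairness properties that have been proposed are demographic parity (equal positive-prediction rates across groups) and equal opportunity (equal true-positive rates across groups, as in the organizers' later "Equality of Opportunity in Supervised Learning"). The sketch below, with illustrative function names and toy data, shows how these gaps can be measured for a binary classifier:

```python
# Hedged sketch: measuring two formal fairness criteria for a binary
# classifier. Function names and the toy data are illustrative, not
# drawn from any particular library.

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    rates = []
    for g in (0, 1):
        preds = [p for p, gr in zip(y_pred, group) if gr == g]
        rates.append(sum(preds) / len(preds))
    return abs(rates[0] - rates[1])

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between groups 0 and 1."""
    tprs = []
    for g in (0, 1):
        # Restrict to genuinely positive individuals in group g.
        preds = [p for p, t, gr in zip(y_pred, y_true, group) if gr == g and t == 1]
        tprs.append(sum(preds) / len(preds))
    return abs(tprs[0] - tprs[1])

# Toy example: predictions for eight individuals in two groups.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_gap(y_pred, group))         # 0.0
print(equal_opportunity_gap(y_true, y_pred, group))  # 0.5
```

Note that the two criteria can disagree: here both groups receive positive predictions at the same rate (demographic parity gap of 0), yet qualified members of group 0 are accepted only half as often as qualified members of group 1, which is exactly the kind of tension the workshop questions point at.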
Participants will work together to understand the key normative and legal issues at stake, map the relevant computer science scholarship, evaluate the state of the solutions thus far proposed, and explore opportunities for new research and thinking within machine learning itself.
Author Information
Moritz Hardt (UC Berkeley)
Solon Barocas (Microsoft Research)
More from the Same Authors
• 2022 : Causal Inference out of Control: Identifying the Steerability of Consumption
  Gary Cheng · Moritz Hardt · Celestine Mendler-Dünner
• 2017 Poster: Avoiding Discrimination through Causal Reasoning
  Niki Kilbertus · Mateo Rojas Carulla · Giambattista Parascandolo · Moritz Hardt · Dominik Janzing · Bernhard Schölkopf
• 2016 Poster: Equality of Opportunity in Supervised Learning
  Moritz Hardt · Eric Price · Nati Srebro
• 2015 Workshop: Adaptive Data Analysis
  Adam Smith · Aaron Roth · Vitaly Feldman · Moritz Hardt
• 2015 Poster: Generalization in Adaptive Data Analysis and Holdout Reuse
  Cynthia Dwork · Vitaly Feldman · Moritz Hardt · Toni Pitassi · Omer Reingold · Aaron Roth
• 2015 Poster: Differentially Private Learning of Structured Discrete Distributions
  Ilias Diakonikolas · Moritz Hardt · Ludwig Schmidt
• 2014 Poster: The Noisy Power Method: A Meta Algorithm with Applications
  Moritz Hardt · Eric Price
• 2014 Spotlight: The Noisy Power Method: A Meta Algorithm with Applications
  Moritz Hardt · Eric Price