Abstract
Ethics is the philosophy of human conduct: it addresses the question “how should we act?” Throughout most of history the repertoire of actions available to us was limited, and their consequences were constrained in scope and impact by dispersed power structures and slow trade. Today, in our globalised and networked world, a decision can affect billions of people instantaneously and have tremendously complex repercussions. Machine learning algorithms are replacing humans in making many of the decisions that affect our everyday lives. How can we decide how machine learning algorithms and their designers should act? What is the ethics of today and what will it be in the future?
In this one day workshop we will explore the interaction of AI, society, and ethics through three general themes.
Advancing and Connecting Theory: How do different fairness metrics relate to one another? What are the trade-offs between them? How do fairness, accountability, transparency, interpretability and causality relate to ethical decision making? What principles can we use to guide us in selecting fairness metrics within a given context? Can we connect these principles back to ethics in philosophy? Are these principles still relevant today?
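As a toy illustration of the first two questions, the following sketch (our own example, not drawn from any workshop submission) computes two common group fairness metrics on synthetic data and shows that a predictor with equal error rates across groups can still violate demographic parity when base rates differ. All names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)                        # 0 = group A, 1 = group B
y_true = rng.binomial(1, np.where(group == 0, 0.5, 0.3))  # unequal base rates
# A predictor with the same TPR (0.8) and FPR (0.2) in both groups:
y_pred = rng.binomial(1, np.where(y_true == 1, 0.8, 0.2))

def demographic_parity_gap(y_pred, group):
    """|P(Yhat = 1 | group A) - P(Yhat = 1 | group B)|."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """|TPR_A - TPR_B|, the difference in true positive rates."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Error rates are equal by construction, yet selection rates differ (~0.50 vs ~0.38).
print("equal opportunity gap: ", equal_opportunity_gap(y_true, y_pred, group))
print("demographic parity gap:", demographic_parity_gap(y_pred, group))
```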
Tools and Applications: Real-world examples of how ethical considerations are affecting the design of ML systems and pipelines. Applications of algorithmic fairness, transparency or interpretability to produce better outcomes. Tools that aid in identifying and/or alleviating issues such as bias, discrimination, filter bubbles and feedback loops, and that enable actionable exploration of the resulting trade-offs.
Regulation: With the GDPR coming into force in May 2018 it is the perfect time to examine how regulation can help (or hinder) our efforts to deploy AI for the benefit of society. How are companies and organisations responding to the GDPR? What aspects are working and what are the challenges? How can regulatory or legal frameworks be designed to continue to encourage innovation, so that society as a whole can benefit from AI, whilst still providing protection against its harms?
This workshop is designed to focus on some of the larger ethical issues related to AI and can be seen as a complement to the FATML proposal, which is focused more on fairness, transparency and accountability. We would be happy to link or cluster the workshops together, but we (together with the FATML organizers) think that there is more than two days' worth of material that the community needs to discuss in the area of AI and ethics, so it would be great to have both workshops if possible.
Fri 5:20 a.m. - 5:30 a.m. | Welcome and organisers comments (Introduction) | Chloé Bakalar · Finnian Lattimore · Sarah Bird · Sendhil Mullainathan
Fri 5:30 a.m. - 6:00 a.m. | Jon Kleinberg - Fairness, Simplicity, and Ranking (Invited Talk) | Jon Kleinberg
Recent discussion in the public sphere about classification by algorithms has involved tension between competing notions of what it means for such a classification to be fair to different groups. We consider several of the key fairness conditions that lie at the heart of these debates. In particular, we study how these properties operate when the goal is to rank-order a set of applicants by some criterion of interest, and then to select the top-ranking applicants. Among other results, we show that imposing a constraint to favor "simple" rules -- for example, to promote interpretability -- can have consequences for the equity of the ranking toward disadvantaged groups.
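To make the flavour of this result concrete, here is a small self-contained toy example (ours, not the construction from the talk): when a group is weaker on the one feature a "simple" rule is allowed to use but comparable overall, restricting the ranking to that feature shrinks the group's presence at the top of the list. The feature distributions and the cutoff are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 1000, 100
group = rng.integers(0, 2, size=n)                  # 1 = disadvantaged group
f1 = rng.normal(np.where(group == 1, -0.2, 0.2), 1.0, size=n)
f2 = rng.normal(np.where(group == 1, 0.2, -0.2), 1.0, size=n)

def share_in_top_k(score, k):
    top = np.argsort(-score)[:k]
    return group[top].mean()                        # fraction of top-k from group 1

full_rule = f1 + f2                                 # uses both features
simple_rule = f1                                    # interpretable one-feature rule
print("group-1 share of top-k, full rule:  ", share_in_top_k(full_rule, k))
print("group-1 share of top-k, simple rule:", share_in_top_k(simple_rule, k))
```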
Fri 6:00 a.m. - 6:30 a.m. | Rich Caruana - Justice May Be Blind But It Shouldn’t Be Opaque: The Risk of Using Black-Box Models in Healthcare & Criminal Justice (Invited Talk) | Rich Caruana
In machine learning, a tradeoff must often be made between accuracy and intelligibility. This tradeoff sometimes limits the accuracy of models that can be safely deployed in mission-critical applications such as healthcare and criminal justice, where being able to understand, validate, edit, and ultimately trust a learned model is important. In this talk I’ll present a case study where intelligibility is critical to uncover surprising patterns in the data that would have made deploying a black-box model dangerous. I’ll also show how distillation with intelligible models can be used to detect bias inside black-box models.
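A rough sketch of the distillation idea mentioned at the end of the abstract: fit an interpretable surrogate to a black-box model's outputs and inspect the surrogate for reliance on a sensitive feature. The talk's intelligible models are GAM-style models; a shallow decision tree stands in here, and the dataset and feature names are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
n = 5000
X = rng.normal(size=(n, 4))                         # column 3 plays the "sensitive" role
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n) > 0).astype(int)

black_box = GradientBoostingClassifier().fit(X, y)
black_box_labels = black_box.predict(X)             # distillation targets (hard labels for simplicity)

surrogate = DecisionTreeClassifier(max_depth=3).fit(X, black_box_labels)
print(export_text(surrogate, feature_names=["x0", "x1", "x2", "sensitive"]))
print("importance of sensitive feature:", surrogate.feature_importances_[3])
```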
Fri 6:30 a.m. - 7:00 a.m. | Hoda Heidari - What Can Fair ML Learn from Economic Theories of Distributive Justice? (Invited Talk)
Recently, a number of technical solutions have been proposed for tackling algorithmic unfairness and discrimination. I will talk about some of the connections between these proposals and long-established economic theories of fairness and distributive justice. In particular, I will overview the axiomatic characterization of measures of (income) inequality, and present them as a unifying framework for quantifying individual- and group-level unfairness; I will propose the use of cardinal social welfare functions as an effective method for bounding individual-level inequality; and last but not least, I will cast existing notions of algorithmic (un)fairness as special cases of economic models of equality of opportunity---through this lens, I hope to offer a better understanding of the moral assumptions underlying technical definitions of fairness.
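One way to make the income-inequality connection concrete is the generalized entropy index, an axiomatically characterized inequality measure, applied to per-individual "benefits" received from an algorithmic decision. The sketch below is our own illustration; the benefit definition (predicted label minus true label plus one, used in some related work on unified unfairness measures) is a placeholder, not necessarily the talk's.

```python
import numpy as np

def generalized_entropy_index(benefits, alpha=2.0):
    """GE(alpha) over non-negative individual benefits (alpha = 2 here; the
    alpha -> 0 and alpha -> 1 limits require strictly positive benefits)."""
    b = np.asarray(benefits, dtype=float)
    mu = b.mean()
    return np.mean((b / mu) ** alpha - 1.0) / (alpha * (alpha - 1.0))

# Illustrative benefit: 2 for a false positive, 1 for a correct decision,
# 0 for a false negative, i.e. b_i = yhat_i - y_i + 1.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
benefits = y_pred - y_true + 1

print("GE(2) over individual benefits:", generalized_entropy_index(benefits))
# A perfectly equal allocation of benefits gives GE = 0:
print("GE(2) for equal benefits:      ", generalized_entropy_index(np.ones(8)))
```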
Fri 7:00 a.m. - 7:20 a.m. | Poster Spotlights 1 (Spotlight talks)
Fri 7:20 a.m. - 8:30 a.m. | Posters 1 (Poster Session) | Wei Wei · Flavio Calmon · Travis Dick · Leilani Gilpin · Maroussia Lévesque · Malek Ben Salem · Michael Wang · Jack Fitzsimons · Dimitri Semenovich · Linda Gu · Nathaniel Fruchter
Fri 8:30 a.m. - 8:50 a.m. | BriarPatches: Pixel-Space Interventions for Inducing Demographic Parity (Contributed Talk)
We introduce the BriarPatch, a pixel-space intervention that obscures sensitive attributes from representations encoded in pre-trained classifiers. The patches encourage internal model representations not to encode sensitive information, which has the effect of pushing downstream predictors towards exhibiting demographic parity with respect to the sensitive information. The net result is that BriarPatches provide an intervention mechanism available at the user level, complementing prior research on fair representations that was previously applicable only by model developers and ML experts.
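A heavily simplified sketch of the mechanism described above: learn a pixel patch that is pasted onto inputs so that a frozen sensitive-attribute probe on top of a frozen encoder performs poorly. The architecture, the adversarial objective (maximizing the probe's loss), and the random data are toy stand-ins, not the paper's actual setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Frozen stand-ins for a pre-trained encoder and a sensitive-attribute probe.
encoder = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, stride=2), nn.ReLU(), nn.Flatten(),
    nn.Linear(8 * 15 * 15, 16),
)
probe = nn.Linear(16, 2)
for p in list(encoder.parameters()) + list(probe.parameters()):
    p.requires_grad_(False)

patch = nn.Parameter(torch.zeros(3, 8, 8))          # the learnable pixel-space patch
optimizer = torch.optim.Adam([patch], lr=0.05)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    x = torch.rand(32, 3, 32, 32)                   # stand-in image batch
    a = torch.randint(0, 2, (32,))                  # stand-in sensitive labels
    x_patched = x.clone()
    x_patched[:, :, :8, :8] = torch.sigmoid(patch)  # paste the patch in a corner
    logits = probe(encoder(x_patched))
    loss = -loss_fn(logits, a)                      # make the frozen probe do badly
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```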
Fri 8:50 a.m. - 9:10 a.m. | Temporal Aspects of Individual Fairness (Contributed Talk)
The concept of individual fairness advocates similar treatment of similar individuals to ensure equality in treatment (Dwork et al., 2012). In this paper, we extend this notion to account for the time at which a decision is made, in settings where there exists a notion of "conduciveness" of decisions as perceived by individuals. We introduce two definitions: (i) fairness-across-time and (ii) fairness-in-hindsight. In the former, treatments of individuals are required to be individually fair relative to the past as well as the future, while in the latter we only require individual fairness relative to the past. We show that these two definitions can have drastically different implications in the setting where the principal needs to learn the utility model: one can achieve a vanishing asymptotic loss in long-run average utility relative to the full-information optimum under the fairness-in-hindsight constraint, whereas this asymptotic loss can be bounded away from zero under the fairness-across-time constraint.
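A small sketch of what the fairness-in-hindsight constraint might look like operationally (our own reading, with an illustrative Lipschitz-style similarity constraint, not the paper's exact formulation): each new decision probability is clipped to lie close to the decisions already given to similar past individuals, whereas fairness-across-time would additionally bind decisions against individuals who have not arrived yet.

```python
import numpy as np

def fair_in_hindsight_decision(x_new, desired_p, history, L=1.0):
    """Clip the intended decision probability so that
    |p_new - p_old| <= L * ||x_new - x_old|| for every past (x_old, p_old).
    The feasible interval is nonempty by induction when every past decision
    was itself chosen this way."""
    lo, hi = 0.0, 1.0
    for x_old, p_old in history:
        d = np.linalg.norm(x_new - x_old)
        lo = max(lo, p_old - L * d)
        hi = min(hi, p_old + L * d)
    return float(np.clip(desired_p, lo, hi))

history = []
rng = np.random.default_rng(3)
for t in range(5):
    x = rng.normal(size=2)
    desired = 1 / (1 + np.exp(-x.sum()))            # the principal's current utility estimate
    p = fair_in_hindsight_decision(x, desired, history, L=0.5)
    history.append((x, p))
    print(f"t={t}: desired={desired:.2f}, fair-in-hindsight={p:.2f}")
```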
Fri 9:10 a.m. - 9:30 a.m. | Explaining Explanations to Society (Contributed Talk)
There is a disconnect between explanatory artificial intelligence (XAI) methods for deep neural networks and the types of explanations that are useful for and demanded by society (policy makers, government officials, etc.). Questions that experts in artificial intelligence (AI) ask of opaque systems provide inside explanations, focused on debugging, reliability, and validation. These are different from the questions that society will ask of these systems in order to build trust and confidence in their decisions. Although explanatory AI systems can answer many of the questions that experts desire, they often don’t explain why they made decisions in a way that is precise (true to the model) and understandable to humans. These outside explanations can be used to build trust, comply with regulatory and policy changes, and act as external validation. In this paper, we explore the types of questions that explanatory deep neural network (DNN) systems can answer and discuss challenges inherent in building explanatory systems that provide outside explanations of systems for societal requirements and benefit.
Fri 9:30 a.m. - 11:00 a.m. | Lunch
Fri 11:00 a.m. - 11:30 a.m. | Hanna Wallach - Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need? (Invited Talk) | Hanna Wallach
The potential for machine learning systems to amplify social inequities and unfairness is receiving increasing popular and academic attention. A surge of recent research has focused on the development of algorithmic tools to detect and mitigate such unfairness. However, if these tools are to have a positive impact on industry practice, it is crucial that their design be informed by an understanding of industry teams’ actual needs. Through semi-structured interviews with 35 machine learning practitioners, spanning 19 teams and 10 companies, and an anonymous survey of 267 practitioners, we conducted the first systematic investigation of industry teams' challenges and needs for support in developing fairer machine learning systems. I will describe this work and summarize areas of alignment and disconnect between the challenges faced by industry practitioners and the solutions proposed in the academic literature. Based on these findings, I will highlight directions for future research that will better address practitioners' needs.
Fri 11:30 a.m. - 12:00 p.m. | Roel Dobbe - Ethics & Accountability in AI and Algorithmic Decision Making Systems - There's No Such Thing As A Free Lunch (Invited Talk)
Addressing a rapidly growing public awareness about bias and fairness issues in algorithmic decision-making systems (ADS), the tech industry is now championing a set of tools to assess and mitigate them. Such tools, broadly categorized as algorithmic fairness definitions, metrics and mitigation strategies, find their roots in recent research from the community on Fairness, Accountability and Transparency in Machine Learning (FAT/ML), which started convening in 2014 at popular machine learning conferences and has since been succeeded by a broader conference on Fairness, Accountability and Transparency in Sociotechnical Systems (FAT*). Whereas there is value in this research to assist diagnosis and informed debate about the inherent trade-offs and ethical choices that come with data-driven approaches to policy and decision-making, marketing poorly validated tools as quick-fix strategies to eliminate bias is problematic and threatens to deepen an already growing sense of distrust among companies and institutions procuring data analysis software and enterprise platforms. This trend coincides with efforts by the IEEE and others to develop certification and marking processes that "advance transparency, accountability and reduction in algorithmic bias in Autonomous and Intelligent Systems". Combined, these efforts suggest a checkbox recipe for improving accountability and resolving the many ethical issues that have surfaced in the rapid deployment of ADS. In this talk, we nuance this timely debate by pointing at the inherent technical limitations of fairness metrics as a go-to tool for fixing bias. We discuss earlier attempts at certification to clarify the pitfalls. We refer to developments in governments adopting ADS and to how a lack of accountability and existing power structures are leading to new forms of harm that question the very efficacy of ADS. We end by discussing productive uses of diagnostic tools and the concept of Algorithmic Impact Assessment as a new framework for identifying the value, limitations and challenges of integrating algorithms in real-world contexts.
Fri 12:00 p.m. - 12:20 p.m. | Poster Spotlights 2 (Spotlight talks)
Fri 12:20 p.m. - 1:30 p.m. | Posters 2 (Poster session)
Fri 1:30 p.m. - 2:00 p.m. | Manuel Gomez Rodriguez - Enhancing the Accuracy and Fairness of Human Decision Making (Invited Talk) | Manuel Gomez Rodriguez
Societies often rely on human experts to take a wide variety of decisions affecting their members, from jail-or-release decisions taken by judges and stop-and-frisk decisions taken by police officers to accept-or-reject decisions taken by academics. In this context, each decision is taken by an expert who is typically chosen uniformly at random from a pool of experts. However, these decisions may be imperfect due to limited experience, implicit biases, or faulty probabilistic reasoning. Can we improve the accuracy and fairness of the overall decision making process by optimizing the assignment between experts and decisions? In this talk, we address the above problem from the perspective of sequential decision making and show that, for different fairness notions from the literature, it reduces to a sequence of (constrained) weighted bipartite matchings, which can be solved efficiently using algorithms with approximation guarantees. Moreover, these algorithms also benefit from posterior sampling to actively trade off exploitation---selecting expert assignments which lead to accurate and fair decisions---and exploration---selecting expert assignments to learn about the experts' preferences and biases. We demonstrate the effectiveness of our algorithms on both synthetic and real-world data and show that they can significantly improve both the accuracy and fairness of the decisions taken by pools of experts.
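The core reduction described in the talk can be illustrated with a weighted bipartite matching between cases and experts. The sketch below (our own, with a synthetic quality matrix) maximizes total expected decision quality using SciPy's assignment solver, and omits the fairness constraints and posterior-sampling exploration that the talk adds on top.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(4)
n_experts, n_cases = 6, 6
# quality[i, j]: estimated probability that expert i decides case j correctly.
quality = rng.uniform(0.5, 0.95, size=(n_experts, n_cases))

# linear_sum_assignment minimizes cost, so negate quality to maximize it.
experts, cases = linear_sum_assignment(-quality)
for e, c in zip(experts, cases):
    print(f"case {c} -> expert {e} (estimated accuracy {quality[e, c]:.2f})")
print("total expected accuracy:", quality[experts, cases].sum())
```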
Fri 2:00 p.m. - 2:45 p.m. | Discussion Panel
Author Information
Chloe Bakalar (Princeton University)
Sarah Bird (Facebook AI Research)
Sarah leads research and emerging technology strategy for Azure AI. Sarah works to accelerate the adoption and impact of AI by bringing together the latest research innovations with the best of open source and product expertise to create new tools and technologies. Sarah is currently leading the development of responsible AI tools in Azure Machine Learning. She is also an active member of the Microsoft AETHER committee, where she works to develop and drive company-wide adoption of responsible AI principles, best practices, and technologies. Sarah was one of the founding researchers in the Microsoft FATE research group and, prior to joining Microsoft, worked on AI fairness at Facebook. Sarah is an active contributor to the open source ecosystem: she co-founded ONNX, an open source standard for machine learning models, and was a leader in the PyTorch 1.0 project. She was an early member of the machine learning systems research community and has been active in growing and forming the community. She co-founded the SysML research conference and the Learning Systems workshops. She has a Ph.D. in computer science from UC Berkeley, advised by Dave Patterson, Krste Asanovic, and Burton Smith.
Tiberio Caetano (Gradient Institute)
Edward W Felten (Princeton University)
Edward W. Felten is the Robert E. Kahn Professor of Computer Science and Public Affairs at Princeton University, and the founding Director of Princeton's Center for Information Technology Policy. He is a member of the United States Privacy and Civil Liberties Oversight Board. In 2015-2017 he served in the White House as Deputy U.S. Chief Technology Officer. In 2011-12 he served as the first Chief Technologist at the U.S. Federal Trade Commission. His research interests include computer security and privacy, and technology law and policy. He has published more than 150 papers in the research literature, and three books. He is a member of the National Academy of Engineering and the American Academy of Arts and Sciences, and is a Fellow of the ACM.
Dario Garcia (Facebook)
Isabel Kloumann (Facebook)
Finnian Lattimore (The Gradient Institute)
Sendhil Mullainathan (University of Chicago)
D. Sculley (Google Research)