Tutorial
Beyond Fairness in Machine Learning
Timnit Gebru · Emily Denton

Mon Dec 06 01:00 PM -- 05:00 PM (PST)

The machine learning community is seeing an increased focus on fairness-oriented methods of model and dataset development. However, much of this work rests on a purely technical understanding of fairness -- one that has come to mean parity of model performance across sociodemographic groups -- which offers only a narrow view of how machine learning technologies intersect with the systems of oppression that structure their development and use in the real world. In contrast, we believe it is essential to examine machine learning technologies through a sociotechnical lens, asking how marginalized communities are excluded from their development and impacted by their deployment.

Our tutorial will center the perspectives and stories of communities who have been harmed by machine learning technologies and the dominant logics operative within this field. We believe it is important to host these conversations within the NeurIPS venue so that machine learning researchers and practitioners can engage with these perspectives and understand the lived realities of marginalized communities affected by the outputs of the field. In doing so, we hope to shift the focus away from narrow technical understandings of fairness and towards justice, equity, and accountability. We believe this is a critical moment for machine learning practitioners, and for the field as a whole, to come together and reimagine what the field might look like. We have great faith in the machine learning community and hope that our tutorial will foster the difficult conversations and meaningful reflection on the state of the field that are essential to begin constructing a different mode of operating.

The tutorial will highlight research on uncovering and mitigating the unfair bias and historical discrimination that machine learning systems learn to mimic and propagate, as well as the lived realities of marginalized communities impacted by these technologies. We will provide participants with tools and frameworks to incorporate into their own research practice that facilitate socially aware work and help mitigate the harmful impacts of their research.

Author Information

Timnit Gebru (Black in AI)

Timnit Gebru was recently fired by Google for raising issues of discrimination in the workplace. Prior to that, she was a co-lead of the Ethical AI research team at Google Brain. She received her PhD from the Stanford Artificial Intelligence Laboratory, studying computer vision under Fei-Fei Li, and completed a postdoc at Microsoft Research, New York City, in the FATE (Fairness, Accountability, Transparency, and Ethics in AI) group, where she studied algorithmic bias and the ethical implications of projects that aim to gain insights from data. Timnit also co-founded Black in AI, a nonprofit that works to increase the presence, inclusion, visibility, and health of Black people in the field of AI.

Emily Denton (Google)

Emily Denton is a Research Scientist at Google, where they examine the societal impacts of AI technology. Their recent research centers on critically examining the norms, values, and work practices that structure the development and use of machine learning datasets. Prior to joining Google, Emily received their PhD in machine learning from the Courant Institute of Mathematical Sciences at New York University, where they focused on unsupervised learning and generative modeling of images and video.
