The machine learning community is seeing an increased focus on fairness-oriented methods of model and dataset development. However, much of this work is constrained by a purely technical understanding of fairness -- one that has come to mean parity of model performance across sociodemographic groups -- which offers a narrow view of how machine learning technologies intersect with the systems of oppression that structure their development and use in the real world. In contrast, we believe it is essential to examine machine learning technologies through a sociotechnical lens, asking how marginalized communities are excluded from their development and impacted by their deployment. Our tutorial will center the perspectives and stories of communities who have been harmed by machine learning technologies and by the dominant logics operative within the field. We believe it is important to host these conversations within the NeurIPS venue so that researchers and practitioners in machine learning can engage with these perspectives and understand the lived realities of marginalized communities impacted by the outputs of the field. In doing so, we hope to shift the focus away from singular technical understandings of fairness and towards justice, equity, and accountability. We believe this is a critical moment for machine learning practitioners, and for the field as a whole, to come together and reimagine what this field might look like. We have great faith in the machine learning community and hope that our tutorial will foster the difficult conversations and meaningful reflection on the state of the field that are essential to begin constructing a different mode of operating. Our tutorial will highlight research on uncovering and mitigating the unfair bias and historical discrimination that machine learning systems learn to mimic and propagate. We will also highlight the lived realities of marginalized communities impacted by machine learning technologies. We will provide tutorial participants with tools and frameworks to incorporate into their own research practice that facilitate socially aware work and help mitigate the harmful impacts of their research.
Mon 1:00 p.m. - 2:00 p.m. | Machine learning in practice: Who is benefiting? Who is being harmed? (Talk) | Timnit Gebru
Mon 2:00 p.m. - 2:20 p.m. | Break
Mon 2:20 p.m. - 3:20 p.m. | Whose ground truth? Challenging the mythical objective, neutral standpoint (Talk) | Emily Denton
Mon 3:20 p.m. - 3:40 p.m. | Break
Mon 3:40 p.m. - 4:20 p.m. | Live Discussion and Q&A (Q&A)
Mon 4:20 p.m. - 4:30 p.m. | Break
Mon 4:30 p.m. - 5:00 p.m. | Case Study (Talk) | Timnit Gebru · Emily Denton
Author Information
Timnit Gebru (Black in AI)
Timnit Gebru was recently fired by Google for raising issues of discrimination in the workplace. Prior to that, she was a co-lead of the Ethical AI research team at Google Brain. She received her PhD from the Stanford Artificial Intelligence Laboratory, studying computer vision under Fei-Fei Li, and did a postdoc at Microsoft Research, New York City, in the FATE (Fairness, Accountability, Transparency, and Ethics in AI) group, where she studied algorithmic bias and the ethical implications underlying projects aiming to gain insights from data. Timnit also co-founded Black in AI, a nonprofit that works to increase the presence, inclusion, visibility, and health of Black people in the field of AI.
Emily Denton (Google)
Emily Denton is a Research Scientist at Google where they examine the societal impacts of AI technology. Their recent research centers on critically examining the norms, values, and work practices that structure the development and use of machine learning datasets. Prior to joining Google, Emily received their PhD in machine learning from the Courant Institute of Mathematical Sciences at New York University, where they focused on unsupervised learning and generative modeling of images and video.
More from the Same Authors
- 2021: Constructing a Visual Dataset to Study the Effects of Spatial Apartheid in South Africa
  Raesetje Sefala · Timnit Gebru · Luzango Mfupe · Nyalleng Moorosi · Richard Klein
- 2021: Case Study
  Timnit Gebru · Emily Denton
- 2021: Whose ground truth? Challenging the mythical objective, neutral standpoint
  Emily Denton
- 2021: Machine learning in practice: Who is benefiting? Who is being harmed?
  Timnit Gebru
- 2020: Strategies for anticipating and mitigating risks
  Ashley Casovan · Timnit Gebru · Shakir Mohamed · Solon Barocas · Aviv Ovadya
- 2020: Panel 1: Tensions & Cultivating Resistance AI
  Abeba Birhane · Timnit Gebru · Noopur Raval · Ramon Vilarino
- 2018: Bias and fairness in AI
  Timnit Gebru · Margaret Mitchell · Brittny-Jade E Saunders
- 2017 Workshop: Learning Disentangled Features: from Perception to Control
  Emily Denton · Siddharth Narayanaswamy · Tejas Kulkarni · Honglak Lee · Diane Bouchacourt · Josh Tenenbaum · David Pfau
- 2017: Invited Talk
  Emily Denton
- 2017 Poster: Unsupervised Learning of Disentangled Representations from Video
  Emily Denton · Vighnesh Birodkar
- 2017 Spotlight: Unsupervised Learning of Disentangled Representations from Video
  Emily Denton · Vighnesh Birodkar
- 2016: Discussion panel
  Ian Goodfellow · Soumith Chintala · Arthur Gretton · Sebastian Nowozin · Aaron Courville · Yann LeCun · Emily Denton
- 2015 Poster: Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks
  Emily Denton · Soumith Chintala · Arthur Szlam · Rob Fergus
- 2014 Poster: Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation
  Emily Denton · Wojciech Zaremba · Joan Bruna · Yann LeCun · Rob Fergus