Building trustworthy, effective, and responsible machine learning systems hinges on understanding how differences in training data and modeling decisions interact to impact predictive performance. In this work, we seek to better understand how we might characterize, detect, and design for data-model synergies. We focus on a particular type of data-model inefficiency, in which adding training data from some sources can actually lower performance evaluated on key sub-groups of the population, a phenomenon we refer to as negative data externalities on group performance. Such externalities can arise in standard learning settings and can manifest differently depending on the relationship between training set size and model size. Data externalities directly imply a lower bound on feasible model improvements, yet improving models efficiently requires understanding the underlying data-model tensions. From a broader perspective, our results indicate that data-efficiency is a key component of both accurate and trustworthy machine learning.
Author Information
Esther Rolf (Harvard University)
Ben Packer (Stanford University)
Alex Beutel (Google Research)
Fernando Diaz (Google)
Fernando Diaz is a research scientist at Google Brain Montréal. His research focuses on the design of information access systems, including search engines, music recommendation services, and crisis response platforms. He is particularly interested in understanding and addressing the societal implications of artificial intelligence more generally. Previously, Fernando was the assistant managing director of Microsoft Research Montréal and a director of research at Spotify, where he helped establish its research organization on recommendation, search, and personalization. Fernando's work has received awards at SIGIR, WSDM, ISCRAM, and ECIR. He is the recipient of the 2017 British Computer Society Karen Spärck Jones Award. Fernando has co-organized workshops and tutorials at SIGIR, WSDM, and WWW. He has also co-organized several NIST TREC initiatives, WSDM (2013), the Strategic Workshop on Information Retrieval (2018), FAT* (2019), SIGIR (2021), and the CIFAR Workshop on Artificial Intelligence and the Curation of Culture (2019).
More from the Same Authors
- 2021 : Artsheets for Art Datasets »
  Ramya Srinivasan · Emily Denton · Jordan Famularo · Negar Rostamzadeh · Fernando Diaz · Beth Coleman
- 2022 : Exposure Fairness in Music Recommendation »
  Rebecca Salganik · Fernando Diaz · Golnoosh Farnadi
- 2022 Workshop: Cultures of AI and AI for Culture »
  Alex Hanna · Rida Qadri · Fernando Diaz · Nick Seaver · Morgan Scheuerman
- 2022 : Panel »
  Hannah Korevaar · Manish Raghavan · Ashudeep Singh · Fernando Diaz · Chloé Bakalar · Alana Shine
- 2022 Poster: Understanding and Improving Robustness of Vision Transformers through Patch-based Negative Augmentation »
  Yao Qin · Chiyuan Zhang · Ting Chen · Balaji Lakshminarayanan · Alex Beutel · Xuezhi Wang
- 2020 Workshop: Algorithmic Fairness through the Lens of Causality and Interpretability »
  Awa Dieng · Jessica Schrouff · Matt Kusner · Golnoosh Farnadi · Fernando Diaz
- 2020 Tutorial: (Track2) Beyond Accuracy: Grounding Evaluation Metrics for Human-Machine Learning Systems Q&A »
  Praveen Chandar · Fernando Diaz · Brian St. Thomas
- 2020 Tutorial: (Track2) Beyond Accuracy: Grounding Evaluation Metrics for Human-Machine Learning Systems »
  Praveen Chandar · Fernando Diaz · Brian St. Thomas
- 2019 : Balancing Competing Objectives for Welfare-Aware Machine Learning with Imperfect Data »
  Esther Rolf
- 2016 Demonstration: Project Malmo - Minecraft for AI Research »
  Katja Hofmann · Matthew A Johnson · Fernando Diaz · Alekh Agarwal · Tim Hutton · David Bignell · Evelyne Viegas