Tensions Between the Proxies of Human Values in AI
Motivated by mitigating potentially harmful impacts of technologies, the AI community has formulated and accepted mathematical definitions for certain pillars of accountability: e.g., privacy, fairness, and model transparency. Yet, we argue this approach is fundamentally misguided: these definitions are imperfect, siloed constructions of the human values they are meant to proxy, while giving the false impression that those values are sufficiently embedded in our technologies. Under popularized techniques, tensions arise when practitioners attempt to satisfy each pillar of fairness, privacy, and transparency either in isolation or simultaneously. In this position paper, we argue that the AI community needs to consider alternative formulations of these pillars based on the context in which a technology is situated. By leaning on sociotechnical systems research, we can formulate more compatible, domain-specific definitions of our human values and thereby build more ethical systems.
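For concreteness, one popular mathematical definition of the fairness pillar is demographic parity, which asks that a model's positive-prediction rate be equal across groups. The sketch below is an illustrative example of such a proxy, not a formulation taken from this paper; the function and variable names are our own. It measures the demographic parity gap of a binary classifier and hints at the kind of tension the abstract describes: driving this gap to zero can conflict with, for example, the noise a differential-privacy mechanism injects to serve the privacy pillar.

    # Illustrative sketch of one formalized proxy for fairness: the
    # demographic parity gap. Names and the toy data below are assumptions,
    # not artifacts from the paper.
    import numpy as np

    def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
        """Absolute difference in positive-prediction rates between two groups.

        y_pred: binary predictions (0/1), one per individual.
        group:  binary group membership (0/1), one per individual.
        """
        rate_0 = y_pred[group == 0].mean()  # P(Y_hat = 1 | A = 0)
        rate_1 = y_pred[group == 1].mean()  # P(Y_hat = 1 | A = 1)
        return abs(rate_0 - rate_1)

    # Toy usage: random predictions and group labels.
    rng = np.random.default_rng(0)
    y_pred = rng.integers(0, 2, size=1000)
    group = rng.integers(0, 2, size=1000)
    print(demographic_parity_gap(y_pred, group))  # near 0 for random predictions

Note that a gap of zero satisfies this particular proxy exactly while saying nothing about whether the underlying human value of fairness is served in a given context, which is the paper's central point.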
Author Information
Daniel Nissani (Arthur)
Teresa Datta (Arthur)
Teresa is a researcher at Arthur who studies the transparency and social impact of algorithmic systems from a human-centered lens. Her interests include use-case evaluations of tools for AI transparency and context-based mechanisms for accountability. Previously, she worked on XAI and HCI projects while completing her M.S. in Data Science at Harvard University.
John Dickerson (Arthur & University of Maryland)
Max Cembalest (Arthur)
Akash Khanna (Arthur)
Haley Massa (Arthur)
More from the Same Authors
- 2021 : Learning Revenue-Maximizing Auctions With Differentiable Matching
  Michael Curry · Uro Lyi · Tom Goldstein · John P Dickerson
- 2021 : An mHealth Intervention for African American and Hispanic Adults: Preliminary Results from a One-Year Field Test
  Christine Herlihy · John Dickerson
- 2022 : A Deep Dive into Dataset Imbalance and Bias in Face Identification
  Valeriia Cherepanova · Steven Reich · Samuel Dooley · Hossein Souri · John Dickerson · Micah Goldblum · Tom Goldstein
- 2022 : Characterizing Anomalies with Explainable Classifiers
  Naveen Durvasula · Valentine d'Hauteville · Keegan Hines · John Dickerson
- 2022 : On the Importance of Architectures and Hyperparameters for Fairness in Face Recognition
  Samuel Dooley · Rhea Sukthanker · John Dickerson · Colin White · Frank Hutter · Micah Goldblum
- 2022 Workshop: Graph Learning for Industrial Applications: Finance, Crime Detection, Medicine and Social Media
  Manuela Veloso · John Dickerson · Senthil Kumar · Eren K. · Jian Tang · Jie Chen · Peter Henstock · Susan Tibbs · Ani Calinescu · Naftali Cohen · C. Bayan Bruss · Armineh Nourbakhsh
- 2022 : Tensions Between the Proxies of Human Values in AI
  Teresa Datta · Daniel Nissani · Max Cembalest · Akash Khanna · Haley Massa · John Dickerson
- 2022 Social: Open Mic Night
  John Dickerson
- 2022 Poster: Robustness Disparities in Face Detection
  Samuel Dooley · George Z Wei · Tom Goldstein · John Dickerson
- 2022 Poster: On the Generalizability and Predictability of Recommender Systems
  Duncan McElfresh · Sujay Khandagale · Jonathan Valverde · John Dickerson · Colin White
- 2021 Poster: VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization
  Mucong Ding · Kezhi Kong · Jingling Li · Chen Zhu · John Dickerson · Furong Huang · Tom Goldstein
- 2021 Poster: Fair Clustering Under a Bounded Cost
  Seyed Esmaeili · Brian Brubach · Aravind Srinivasan · John Dickerson
- 2021 Poster: PreferenceNet: Encoding Human Preferences in Auction Design with Deep Learning
  Neehar Peri · Michael Curry · Samuel Dooley · John Dickerson
- 2021 Poster: How does a Neural Network's Architecture Impact its Robustness to Noisy Labels?
  Jingling Li · Mozhi Zhang · Keyulu Xu · John Dickerson · Jimmy Ba
- 2020 Workshop: Workshop on Dataset Curation and Security
  Nathalie Baracaldo · Yonatan Bisk · Avrim Blum · Michael Curry · John Dickerson · Micah Goldblum · Tom Goldstein · Bo Li · Avi Schwarzschild
- 2020 Poster: Detection as Regression: Certified Object Detection with Median Smoothing
  Ping-yeh Chiang · Michael Curry · Ahmed Abdelkader · Aounon Kumar · John Dickerson · Tom Goldstein
- 2020 Poster: Certifying Strategyproof Auction Networks
  Michael Curry · Ping-yeh Chiang · Tom Goldstein · John Dickerson
- 2020 Poster: Improving Policy-Constrained Kidney Exchange via Pre-Screening
  Duncan McElfresh · Michael Curry · Tuomas Sandholm · John Dickerson
- 2020 Poster: Probabilistic Fair Clustering
  Seyed Esmaeili · Brian Brubach · Leonidas Tsepenekas · John Dickerson
- 2019 Poster: Making the Cut: A Bandit-based Approach to Tiered Interviewing
  Candice Schumann · Zhi Lang · Jeffrey Foster · John Dickerson
- 2019 Poster: Adversarial training for free!
  Ali Shafahi · Mahyar Najibi · Mohammad Amin Ghiasi · Zheng Xu · John Dickerson · Christoph Studer · Larry Davis · Gavin Taylor · Tom Goldstein
- 2015 : Uncertainty in Dynamic Matching
  John P Dickerson