When researchers and practitioners, as well as policy makers and the public, discuss the impacts of deep learning systems, they draw upon multiple conceptual frames that do not sit easily beside each other. Questions of algorithmic fairness arise from a set of concerns that are similar, but not identical, to those that circulate around AI safety, which in turn overlap with, but are distinct from, the questions that motivate work on AI ethics, and so on. Robust bodies of research on privacy, security, transparency, accountability, interpretability, explainability, and opacity are also incorporated into each of these frames and conversations in variable ways. These frames reveal gaps that persist across both highly technical and socially embedded approaches, and yet collaboration across these gaps has proven challenging.
Fairness, Ethics, and Safety in AI each draw upon different disciplinary prerogatives, variously centering applied mathematics, analytic philosophy, behavioral sciences, legal studies, and the social sciences in ways that make conversation between these frames fraught with misunderstandings. These misunderstandings arise from a high degree of linguistic slippage between different frames, and reveal the epistemic fractures that undermine valuable synergy and productive collaboration. This workshop focuses on ways to translate between these ongoing efforts and bring them into necessary conversation in order to understand the profound impacts of algorithmic systems in society.
Fri 8:00 a.m. - 8:15 a.m. | Opening Remarks (Talk) | Jack Poulson · Manfred K. Warmuth
Fri 8:15 a.m. - 8:45 a.m. | Invited Talk (Talk) | Yoshua Bengio
Fri 8:45 a.m. - 9:45 a.m. | Approaches to Understanding AI (Discussion Panel) | Yoshua Bengio · Roel Dobbe · Madeleine Elish · Joshua Kroll · Jacob Metcalf · Jack Poulson
The stakes of AI certainly alter how we relate to each other as humans: how we know what we know about reality, how we communicate, how we work and earn money, and how we think of ourselves as human. But in grappling with these changing relations, three fairly concrete approaches have dominated the conversation: ethics, fairness, and safety. These approaches come from very different academic backgrounds, draw attention to very different aspects of AI, and imagine very different problems and solutions as relevant, leading us to ask:
• What are the commonalities and differences between ethics, fairness, and safety as approaches to addressing the challenges of AI?
• How do these approaches imagine different problems and solutions for the challenges posed by AI?
• How can these approaches work together, or are there some areas where they are mutually incompatible?
Fri 9:45 a.m. - 10:00 a.m. | Spectrogram (Activity) | Emanuel Moss
Fri 10:00 a.m. - 10:30 a.m. | Coffee Break
Fri 10:30 a.m. - 11:30 a.m. | Detecting and Documenting AI Impacts (Discussion Panel) | Fitzroy Christian · Alexa Hagerty · Fabian Rogers · Friederike Schuur · Jacob Snow · Madeleine Elish
Algorithmic systems are being widely used in key social institutions, and while they promise radical improvements in fields from public health to energy allocation, they also raise troubling issues of bias, discrimination, and “automated inequality.” They present irresolvable challenges related to the dual-use nature of these technologies and secondary effects that are difficult to anticipate, and they alter power relations between individuals, companies, and governments.
• How should we delimit the scope of AI impacts? What can properly be considered an AI impact, as opposed to an impact arising from some other cause?
• How do we detect and document the social impacts of AI?
• What tools, processes, and institutions ought to be involved in addressing these questions?
Fri 11:30 a.m. - 12:30 p.m. | Responsibilities (Discussion Panel) | Been Kim · Liz O'Sullivan · Friederike Schuur · Andrew Smart · Jacob Metcalf
While a great deal of AI research happens in academic settings, much of that work is operationalized within corporate contexts. Some companies serve as vendors, selling AI systems to government entities, some sell to other companies, some sell directly to end-users, and yet others sell to any combination of the above.
• What set of responsibilities does the AI industry have with respect to AI impacts?
• How do those responsibilities shift depending on whether the business model is B2B, B2G, or B2C?
• What responsibilities does government have to society with respect to AI impacts arising from industry?
• What role do civil society organizations have to play in this conversation?
Fri 12:30 p.m. - 2:00 p.m. | Lunch (Lunch Break)
Fri 2:00 p.m. - 2:45 p.m. | A Conversation with Meredith Whittaker (Interview) | Mona Sloane · Meredith Whittaker
Fri 2:45 p.m. - 3:45 p.m. | Global implications (Discussion Panel) | Eirini Malliaraki · Jack Poulson · Vinodkumar Prabhakaran · Mona Sloane · Alexa Hagerty
The risks and benefits of AI are unevenly distributed within societies and across the globe. Governance regimes differ drastically across regions of the world, as do the political and ethical implications of AI technologies.
• How do we better understand how AI technologies operate around the world and the range of risks they carry for different societies?
• Are there claims about the implications of AI that apply everywhere around the globe? If so, what are they?
• What can we learn from AI’s impacts on labor, environment, public health, and agriculture in diverse settings?
Fri 3:45 p.m. - 4:30 p.m. | Coffee Break
Fri 4:30 p.m. - 5:45 p.m. | Solutions (Discussion Panel) | Fitzroy Christian · Lily Hu · Risi Kondor · Brandeis Marshall · Fabian Rogers · Friederike Schuur · Emanuel Moss
Even though no set of steps can fully address all AI impacts, there are concrete things that ought to be done, ranging across technical, socio-technical, and legal or regulatory possibilities.
• What technical, social, and/or regulatory solutions are necessary to address the riskiest aspects of AI?
• What are key approaches to minimizing the risks of AI technologies?
Author Information
Igor Rubinov (Dovetail Labs)
Risi Kondor (U. Chicago)
Risi Kondor joined the Flatiron Institute in 2019 as a Senior Research Scientist with the Center for Computational Mathematics. Previously, Kondor was an Associate Professor in the Department of Computer Science, Statistics, and the Computational and Applied Mathematics Initiative at the University of Chicago. His research interests include computational harmonic analysis and machine learning. Kondor holds a Ph.D. in Computer Science from Columbia University, an MS in Knowledge Discovery and Data Mining from Carnegie Mellon University, and a BA in Mathematics from the University of Cambridge. He also holds a diploma in Computational Fluid Dynamics from the Von Karman Institute for Fluid Dynamics and a diploma in Physics from Eötvös Loránd University in Budapest.
Jack Poulson (Tech Inquiry)
Manfred K. Warmuth (Google Brain)
Emanuel Moss (CUNY Graduate Center | Data & Society)
Alexa Hagerty (University of Cambridge; Dovetail Labs)