

Workshop

Algorithmic Fairness through the Lens of Causality and Privacy

Awa Dieng · Miriam Rateike · Golnoosh Farnadi · Ferdinando Fioretto · Matt Kusner · Jessica Schrouff

Room 392

As machine learning models permeate every aspect of decision-making systems in consequential areas such as healthcare and criminal justice, it has become critical for these models to satisfy trustworthiness desiderata such as fairness, interpretability, accountability, privacy and security. While these desiderata were initially studied in isolation, recent work has emerged at their intersection, raising interesting questions about how fairness can be achieved from a causal perspective and under privacy constraints.

Indeed, the field of causal fairness has expanded considerably in recent years, notably as a way to counteract the limitations of earlier statistical definitions of fairness. While a causal framing provides flexibility in modelling and mitigating sources of bias through an explicit causal model, proposed approaches rely heavily on assumptions about the data-generating process, namely the faithfulness and ignorability assumptions. This leads to open discussions on (1) how to fully characterize causal definitions of fairness, (2) how, if possible, to improve the applicability of such definitions, and (3) what constitutes a suitable causal framing of bias from a sociotechnical perspective.
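For concreteness, one widely cited causal definition (a reference point, not a product of this workshop) is counterfactual fairness (Kusner et al., 2017): a predictor \hat{Y} is counterfactually fair if, for every outcome y and every pair of sensitive-attribute values a, a',

P(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a) = P(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a),

that is, intervening on the sensitive attribute A in the assumed causal model leaves the predictor's distribution unchanged given the observed context X = x, A = a. Evaluating such a criterion requires exactly the kind of structural assumptions, e.g., ignorability, discussed above.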

Additionally, while most existing work on causal fairness assumes that sensitive attribute data are observed, such information is often unavailable due to, for example, data privacy laws or ethical considerations. This observation has motivated initial work on training and evaluating fair algorithms without access to sensitive information and on studying the compatibility and trade-offs between fairness and privacy. However, such work has been limited, for the most part, to statistical definitions of fairness, raising the question of whether these methods can be extended to causal definitions.

Given the interesting questions that emerge at the intersection of these fields, this workshop aims to investigate in depth how these topics relate, and also how they can augment one another to provide better, or better-suited, definitions and mitigation strategies for algorithmic fairness.

Timezone: America/Los_Angeles

Schedule