Workshop
Mon Dec 13 01:00 AM -- 12:30 PM (PST)
Algorithmic Fairness through the lens of Causality and Robustness
Jessica Schrouff · Awa Dieng · Golnoosh Farnadi · Mark Kwegyir-Aggrey · Miriam Rateike

Trustworthy machine learning (ML) encompasses multiple fields of research, including (but not limited to) robustness, algorithmic fairness, interpretability and privacy. Recently, relationships between techniques and metrics used across different fields of trustworthy ML have emerged, leading to interesting work at the intersection of algorithmic fairness, robustness, and causality.

On one hand, causality has been proposed as a powerful tool to address the limitations of initial statistical definitions of fairness. However, questions have emerged regarding the applicability of such approaches in practice and the suitability of a causal framing for studies of bias and discrimination. On the other hand, the robustness literature has surfaced promising approaches to improving fairness in ML models. For instance, parallels can be drawn between individual fairness and local robustness guarantees. In addition, studying the interactions between fairness and robustness can help us understand how fairness guarantees hold under distribution shift or adversarial/poisoning attacks.

After the first edition of this workshop, which focused on causality and interpretability, we now turn to the intersection of algorithmic fairness with recent techniques in causality and robustness. In this context, we will investigate how these topics relate, and also how they can augment each other to provide better, or better-suited, definitions and mitigation strategies for algorithmic fairness. We are particularly interested in addressing open questions in the field, such as:
- How can causally grounded fairness methods help develop more robust and fair algorithms in practice?
- What is an appropriate causal framing in studies of discrimination?
- How do adversarial and data-poisoning attacks target algorithmic fairness?
- How do fairness guarantees hold under distribution shift?
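To make the parallel between individual fairness and local robustness mentioned above concrete, the sketch below checks a Lipschitz-style individual fairness condition, |f(x) - f(x')| <= L * d(x, x'), for a pair of similar individuals. This is purely illustrative and not taken from any workshop paper: the function name, the linear model, the Euclidean metric, and all numeric values are our own assumptions (in practice, d would be a task-specific similarity metric).

```python
import numpy as np

def individual_fairness_violation(model, x, x_similar, L=1.0):
    """Return True if the Lipschitz-style individual fairness
    condition |f(x) - f(x')| <= L * d(x, x') is violated.
    A local robustness check has the same shape, with d an L_p
    norm and the right-hand side a fixed epsilon."""
    d = np.linalg.norm(x - x_similar)        # Euclidean distance; a task-specific metric in general
    gap = abs(model(x) - model(x_similar))   # difference in the model's scores
    return gap > L * d

# Toy linear scorer; weights and inputs are illustrative values only.
w = np.array([0.5, -0.2])
model = lambda x: float(w @ x)

x1 = np.array([1.0, 1.0])   # two "similar individuals"
x2 = np.array([1.1, 1.0])
print(individual_fairness_violation(model, x1, x2, L=1.0))
```

For a linear model, the score gap is bounded by ||w|| times the input distance, so any model with ||w|| <= L satisfies this condition for every pair; the example above therefore reports no violation.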

Accepted papers
The Many Roles that Causal Reasoning Plays in Reasoning about Fairness in Machine Learning (Oral)
On the Impossibility of Fairness-Aware Learning from Corrupted Data (Oral)
Fairness for Robust Learning to Rank (Poster)
Cooperative Multi-Agent Fairness and Equivariant Policies (Poster)
Fair SA: Sensitivity Analysis for Fairness in Face Recognition (Poster)
Measure Twice, Cut Once: Quantifying Bias and Fairness in Deep Networks (Poster)
Bounded Fairness Transferability subject to Distribution Shift (Poster)
Counterfactual Fairness in Mortgage Lending via Matching and Randomization (Poster)
Structural Interventions on Automated Decision Making Systems (Poster)
Balancing Robustness and Fairness via Partial Invariance (Poster)
Implications of Modeled Beliefs for Algorithmic Fairness in Machine Learning (Poster)
Fairness Degrading Adversarial Attacks Against Clustering Algorithms (Poster)
Algorithmic Bias and Data Bias: Understanding the Relation between Distributionally Robust Optimization and Data Curation (Oral)
Detecting Bias in the Presence of Spatial Autocorrelation (Oral)
Fair Clustering Using Antidote Data (Oral)