

Poster in Workshop: Algorithmic Fairness through the Lens of Causality and Privacy

Perception as a Fairness Parameter

Jose Alvarez · Mayra Russo


Abstract:

Perception refers to the process by which two or more agents make sense of the same information differently. Cognitive psychologists have long studied perception, and more recently the artificial intelligence (AI) community has taken an interest in it because of its role in biased decision-making. Although largely unexplored in the Fair AI literature, in this work we treat perception as a parameter of interest for tackling fairness problems and present the fair causal perception (FCP) framework. FCP allows an algorithmic decision-maker h to elicit group-specific representations, or perceptions, centered on a discrete protected attribute A in order to improve the information set X used to compute the decision outcome h(X) = Y. The framework combines ontologies and structural causal models, resulting in a perspective-based causal model. Under FCP, the decision-maker h can choose to enhance X, depending on its fairness goals, by re-interpreting it under A-specific perceptions, meaning that the same individual instance can be classified differently depending on the evoked representation. We showcase the framework with a college admissions example on synthetic data in which, given a tie between similar candidates with different values of the socioeconomic background A, h non-randomly breaks the tie in favor of the under-privileged candidate. Using FCP, we describe what it means to be an applicant from the under-privileged group and how that membership causally affects the observed X in this admissions context; in turn, we also describe the local penalties h introduces when classifying these applicants. Benchmarking against individual fairness metrics, we compare how h derives fairer outcomes under FCP.
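
The abstract does not include code; the following is a minimal Python sketch of the tie-breaking idea it describes, assuming hypothetical candidate records, a socioeconomic attribute A taking the values "privileged" and "under_privileged", and an illustrative additive perception adjustment. None of these names, values, or thresholds come from the paper, and the sketch does not reproduce the ontology- and SCM-based derivation of the local penalties.

```python
# Illustrative sketch (not the authors' implementation) of the tie-breaking
# example from the abstract: when candidates are tied on the observed
# features X, the decision-maker h re-interprets X under a perception tied
# to the protected attribute A and breaks the tie in favor of the
# under-privileged candidate. All names and the adjustment are hypothetical.

from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    score: float      # summary of observed features X (e.g. test score)
    background: str   # protected attribute A: "privileged" or "under_privileged"


def perceived_score(c: Candidate, adjustment: float = 0.05) -> float:
    """Re-interpret the raw score under an A-specific perception.

    Here the 'perception' is modeled as a small additive adjustment for
    under-privileged candidates; in the paper this role is played by local
    penalties derived from a perspective-based causal model.
    """
    if c.background == "under_privileged":
        return c.score + adjustment
    return c.score


def admit(candidates: list[Candidate], slots: int) -> list[Candidate]:
    """Rank candidates by perceived score and admit the top `slots`."""
    ranked = sorted(candidates, key=perceived_score, reverse=True)
    return ranked[:slots]


if __name__ == "__main__":
    pool = [
        Candidate("a", 0.80, "privileged"),
        Candidate("b", 0.80, "under_privileged"),  # tied with "a" on raw score
        Candidate("c", 0.70, "privileged"),
    ]
    # With one slot left, the tie between "a" and "b" is broken
    # non-randomly in favor of the under-privileged candidate "b".
    print([c.name for c in admit(pool, slots=1)])  # -> ['b']
```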
