
Robustness Disparities in Face Detection
Samuel Dooley · George Z Wei · Tom Goldstein · John Dickerson

Thu Dec 01 02:00 PM -- 04:00 PM (PST) @ Hall J #1023

Facial analysis systems have been deployed by large companies and critiqued by scholars and activists for the past decade. Many existing algorithmic audits examine the performance of these systems on later-stage elements of facial analysis pipelines, such as facial recognition and age, emotion, or perceived gender prediction; however, a core component of these systems has been vastly understudied from a fairness perspective: face detection, sometimes called face localization. Since face detection is a prerequisite step in facial analysis systems, any bias we observe in face detection will flow downstream to other components like facial recognition and emotion prediction. Additionally, no prior work has focused on the robustness of these systems under various perturbations and corruptions, which leaves open the question of how different people are affected by these phenomena. We present a first-of-its-kind detailed benchmark of face detection systems, specifically examining the robustness to noise of commercial and academic models. We use both standard and recently released academic facial datasets to quantitatively analyze trends in face detection robustness. Across all datasets and systems, we generally find that photos of individuals who are masculine presenting, older, of darker skin type, or in dim lighting are more susceptible to errors than their counterparts with other identity characteristics.

Author Information

Samuel Dooley (Department of Computer Science, University of Maryland, College Park)
George Z Wei (Meta)
Tom Goldstein (University of Maryland)
John Dickerson (Arthur AI & University of Maryland)
