Fairness is a fundamental requirement for trustworthy and human-centered Artificial Intelligence (AI) systems. However, deep neural networks (DNNs) tend to make unfair predictions when the training data are collected from sub-populations with different sensitive attributes (e.g., skin color, sex, age), leading to biased predictions. We observe that this troubling phenomenon often originates in the data itself: bias information is encoded into the DNN along with the useful information (e.g., class and semantic information). We therefore propose to use sketching to mitigate this problem. Without sacrificing the utility of the data, we explore image-to-sketching methods that preserve the semantic information needed for the target classification task while filtering out the irrelevant bias information. In addition, we design a fair loss to further improve model fairness. We evaluate our method in extensive experiments on both a general scene dataset and a medical imaging dataset. The results show that the chosen image-to-sketching method improves model fairness and achieves satisfactory results compared with state-of-the-art (SOTA) methods. Our code will be released upon acceptance.
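The two components the abstract describes, a sketch transform that discards appearance cues and a fairness term added to the training loss, can be illustrated with a minimal NumPy sketch. Both functions are hypothetical stand-ins, not the paper's implementation: gradient magnitude is used as a crude proxy for a learned image-to-sketch model, and a demographic-parity gap stands in for the proposed fair loss.

```python
import numpy as np

def image_to_sketch(img):
    """Crude edge-based 'sketch' proxy: normalized gradient magnitude of a
    grayscale image. Color/texture (potential bias cues) are discarded while
    shape information, useful for classification, is largely preserved."""
    gy, gx = np.gradient(img.astype(float))   # row-wise and column-wise gradients
    mag = np.hypot(gx, gy)                    # edge strength at each pixel
    return mag / (mag.max() + 1e-8)           # scale to [0, 1]

def fairness_penalty(scores, groups):
    """Hypothetical fair-loss term: absolute gap between the mean predicted
    scores of the two sensitive groups (a demographic-parity-style penalty
    that could be added to the task loss with some weight)."""
    scores, groups = np.asarray(scores, float), np.asarray(groups)
    return abs(scores[groups == 0].mean() - scores[groups == 1].mean())
```

A vertical step edge, for instance, yields a sketch that is nonzero only near the boundary, and `fairness_penalty([0.9, 0.8, 0.2, 0.1], [0, 0, 1, 1])` returns a gap of `0.7`, which the training objective would push toward zero.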
Author Information
Ruichen Yao (University of British Columbia)
Ziteng Cui
Xiaoxiao Li (UBC)
Lin Gu (RIKEN)