Aerial images of neighborhoods in South Africa show the clear legacy of Apartheid, a former policy of political and economic discrimination against non-European groups, with completely segregated townships next to gated wealthy areas. This paper introduces the first publicly available dataset for studying the evolution of spatial apartheid, comprising 6,768 high-resolution satellite images covering the 9 provinces of South Africa. Our dataset was created using polygons demarcating land use, geographically labelled coordinates of buildings in South Africa, and high-resolution satellite imagery covering the country from 2006 to 2017. We describe our iterative process for creating this dataset, which includes pixel-wise labels for 4 classes of neighborhoods: wealthy areas, non-wealthy areas, non-residential neighborhoods, and vacant land. While datasets 7 times smaller than ours have cost over $1M to annotate, our dataset was created with highly constrained resources. Finally, we show example applications examining the evolution of neighborhoods in South Africa using our dataset.
Author Information
Raesetje Sefala (Distributed AI Research (DAIR) Institute)
Raesetje is an AI Research Fellow who uses Computer Vision, Data Science, and general Machine Learning techniques to explore research questions with societal impact. Her research focuses on creating ground-truth datasets and using machine learning and other computational social science techniques to study spatial segregation in post-Apartheid South Africa. Raesetje is a qualified Data Scientist and holds a Master's degree in Computer Science from the University of the Witwatersrand, with a special focus on Machine Learning. She has been technically involved in complex Data Science projects from around the world that involved building innovative solutions. She is mainly interested in using AI to solve problems experienced in developing countries; creating and analysing datasets for machine learning; designing and developing efficient data science and machine learning pipelines for different types of datasets; and contributing to better data and technology policies that serve communities.
Timnit Gebru (Black in AI)
Timnit Gebru was recently fired by Google for raising issues of discrimination in the workplace. Prior to that she was a co-lead of the Ethical AI research team at Google Brain. She received her PhD from the Stanford Artificial Intelligence Laboratory, studying computer vision under Fei-Fei Li, and did a postdoc at Microsoft Research, New York City in the FATE (Fairness, Accountability, Transparency, and Ethics in AI) group, where she studied algorithmic bias and the ethical implications of projects aiming to gain insights from data. Timnit also co-founded Black in AI, a nonprofit that works to increase the presence, inclusion, visibility, and health of Black people in the field of AI.
Luzango Mfupe
Nyalleng Moorosi (Google Ghana)
Richard Klein (University of the Witwatersrand)
More from the Same Authors
- 2022: Ethics Roundtable (Negar Rostamzadeh · Sina Fazelpour · Nyalleng Moorosi)
- 2022 Poster: Fair Wrapping for Black-box Predictions (Alexander Soen · Ibrahim Alabdulmohsin · Sanmi Koyejo · Yishay Mansour · Nyalleng Moorosi · Richard Nock · Ke Sun · Lexing Xie)
- 2022 Invited Talk: "Constructing visual datasets to answer research questions" (Raesetje Sefala)
- 2021: Invited Talk 3 (Nyalleng Moorosi · Razvan Amironesei)
- 2021: Case Study (Timnit Gebru · Emily Denton)
- 2021 Tutorial: Beyond Fairness in Machine Learning (Timnit Gebru · Emily Denton)
- 2021: Machine learning in practice: Who is benefiting? Who is being harmed? (Timnit Gebru)
- 2020: Strategies for anticipating and mitigating risks (Ashley Casovan · Timnit Gebru · Shakir Mohamed · Solon Barocas · Aviv Ovadya)
- 2020: Harms from AI research (Anna Lauren Hoffmann · Nyalleng Moorosi · Vinay Prabhu · Deborah Raji · Jacob Metcalf · Sherry Stanley)
- 2020 Panel 1: Tensions & Cultivating Resistance AI (Abeba Birhane · Timnit Gebru · Noopur Raval · Ramon Vilarino)
- 2018: Bias and fairness in AI (Timnit Gebru · Margaret Mitchell · Brittny-Jade E Saunders)