Tutorial
Incentive-Aware Machine Learning: A Tale of Robustness, Fairness, Improvement, and Performativity
Chara Podimata

Mon Dec 05 11:00 AM -- 01:30 PM (PST) @ Virtual
Event URL: https://www.charapodimata.com/NeurIPS22-tutorial.html »

When an algorithm can make consequential decisions about people's lives, people have an incentive to respond to it strategically in order to obtain a more desirable outcome. Unless the algorithm adapts to this strategizing, it may end up making decisions that are incompatible with the original policy goal. This is the central premise of the rapidly growing research area of incentive-aware Machine Learning (ML). In this tutorial, we introduce this area to the broader ML community. After a primer on the necessary background, we introduce the audience to the four perspectives that have been studied so far: the robustness perspective (where the decision-maker designs algorithms that are robust to strategizing), the fairness perspective (where we study the inequalities that arise or are reinforced as a result of strategizing), the improvement perspective (where the learner tries to incentivize agents to exert effort toward genuinely improving their features rather than merely gaming the decision rule), and the performativity perspective (where the decision-maker seeks a notion of stability in these settings).
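
As a concrete illustration of the robustness perspective, here is a minimal sketch (not part of the tutorial materials): it assumes a linear threshold classifier, a quadratic manipulation cost, and a fixed reward for a positive decision, and shows how an agent might best-respond by gaming its features only when crossing the decision boundary costs less than that reward.

import numpy as np

# Illustrative sketch only: one commonly studied model of strategic behavior,
# with an assumed linear rule sign(w.x - b), quadratic gaming cost, and a
# fixed reward for receiving a positive decision.
def best_response(x, w, b, reward=1.0, cost_scale=1.0):
    score = w @ x - b
    if score >= 0:                      # already classified positively
        return x
    # The cheapest move onto the boundary is along w; its squared length is
    # (b - w.x)^2 / ||w||^2, so the minimal gaming cost is:
    min_cost = cost_scale * (b - w @ x) ** 2 / (w @ w)
    if min_cost > reward:               # gaming costs more than it is worth
        return x
    return x + ((b - w @ x) / (w @ w)) * w  # project onto the decision boundary

# An agent just below the threshold games the score; one far below does not.
w, b = np.array([1.0, 1.0]), 1.0
print(best_response(np.array([0.3, 0.3]), w, b))    # -> [0.5 0.5], on the boundary
print(best_response(np.array([-2.0, -2.0]), w, b))  # -> unchanged, too costly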

Author Information

Chara Podimata (Harvard University)

More from the Same Authors

  • 2021 : Information Discrepancy in Strategic Learning »
    Yahav Bechavod · Chara Podimata · Steven Wu · Juba Ziani
  • 2022 : Strategy-Aware Contextual Bandits »
    Keegan Harris · Chara Podimata · Steven Wu
  • 2022 : Panel »
    Meena Jagadeesan · Avrim Blum · Jon Kleinberg · Celestine Mendler-Dünner · Jennifer Wortman Vaughan · Chara Podimata
  • 2022 : Q & A »
    Chara Podimata
  • 2022 : Tutorial part 1 »
    Chara Podimata
  • 2020 Poster: Learning Strategy-Aware Linear Classifiers »
    Yiling Chen · Yang Liu · Chara Podimata
  • 2019 : Break / Poster Session 1 »
    Antonia Marcu · Yao-Yuan Yang · Pascale Gourdeau · Chen Zhu · Thodoris Lykouris · Jianfeng Chi · Mark Kozdoba · Arjun Nitin Bhagoji · Xiaoxia Wu · Jay Nandy · Michael T Smith · Bingyang Wen · Yuege Xie · Konstantinos Pitas · Suprosanna Shit · Maksym Andriushchenko · Dingli Yu · Gaël Letarte · Misha Khodak · Hussein Mozannar · Chara Podimata · James Foulds · Yizhen Wang · Huishuai Zhang · Ondrej Kuzelka · Alexander Levine · Nan Lu · Zakaria Mhammedi · Paul Viallard · Diana Cai · Lovedeep Gondara · James Lucas · Yasaman Mahdaviyeh · Aristide Baratin · Rishi Bommasani · Alessandro Barp · Andrew Ilyas · Kaiwen Wu · Jens Behrmann · Omar Rivasplata · Amir Nazemi · Aditi Raghunathan · Will Stephenson · Sahil Singla · Akhil Gupta · YooJung Choi · Yannic Kilcher · Clare Lyle · Edoardo Manino · Andrew Bennett · Zhi Xu · Niladri Chatterji · Emre Barut · Flavien Prost · Rodrigo Toro Icarte · Arno Blaas · Chulhee Yun · Sahin Lale · YiDing Jiang · Tharun Kumar Reddy Medini · Ashkan Rezaei · Alexander Meinke · Stephen Mell · Gary Kazantsev · Shivam Garg · Aradhana Sinha · Vishnu Lokhande · Geovani Rizk · Han Zhao · Aditya Kumar Akash · Jikai Hou · Ali Ghodsi · Matthias Hein · Tyler Sypherd · Yichen Yang · Anastasia Pentina · Pierre Gillot · Antoine Ledent · Guy Gur-Ari · Noah MacAulay · Tianzong Zhang
  • 2017 : Spotlights »
    Chara Podimata · Song Zuo · Zhe Feng · Anthony Kim