Workshop
Fri Dec 8th 08:00 AM -- 06:30 PM @ 101 A
Learning in the Presence of Strategic Behavior
Nika Haghtalab · Yishay Mansour · Tim Roughgarden · Vasilis Syrgkanis · Jennifer Wortman Vaughan
Machine learning is primarily concerned with the design and analysis of algorithms that learn about an entity. Increasingly, machine learning is also being used to design policies that affect the very entity it once learned about, which can cause that entity to react and change its behavior. Ignoring such interactions can lead to solutions that are ultimately ineffective in practice. For example, to design an effective ad display, one has to take into account how a viewer will react to the displayed advertisements, for instance by scrolling past or clicking on them. Additionally, in many environments multiple learners learn concurrently about one or more related entities, giving rise to a range of interactions between individual learners. For example, multiple firms may compete or collaborate on performing market research. How do the learners and entities interact? How do these interactions change the task at hand? What are desirable interactions in a learning environment? And what mechanisms can bring about such desirable interactions? These are some of the questions we would like to explore in this workshop.

Traditionally, learning theory has adopted two extreme views in this respect: either learning occurs in isolation from strategic behavior, as in the classical PAC setting where the data is drawn from a fixed distribution, or the learner faces an adversary whose goal is to inhibit the learning process, as in the minimax setting where the data is generated by an adaptive worst-case adversary. While these extreme perspectives have led to elegant results and concepts, such as the VC dimension, the Littlestone dimension, and regret bounds, many of the problems we would like to solve involve strategic behavior that falls into neither extreme. Examples of such problems include, but are not limited to:

1. Learning from data produced by agents who have a vested interest in the outcome or the learning process. For example, learning a measure of the quality of universities by surveying members of academia who stand to gain or lose from the outcome, or a GPS routing app that must learn patterns of traffic delay by routing individuals who have no interest in taking slower routes.

2. Learning a model of the strategic behavior of one or more agents by observing their interactions; for example, learning the economic demands of buyers by observing their bidding patterns when competing with other buyers.

3. Learning as a model of interactions between agents. Examples of this include applications to swarm robotics, where individual agents have to learn to interact in a multi-agent setting in order to achieve individual or collective goals.

4. Interactions between multiple learners. In many settings, two or more learners learn about the same concept or multiple related concepts. How do these learners interact? Under what scenarios would they share knowledge, information, or data? What are the desirable interactions between learners? As an example, consider multiple competing pharmaceutical firms learning about the effectiveness of a certain treatment. While the competing firms would prefer not to share their findings, society benefits when such findings are shared. How can we incentivize these learners to engage in such desirable interactions?
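As a concrete point of reference for the regret bounds mentioned above, the following is a minimal sketch of the classical Hedge (multiplicative-weights) algorithm, which keeps regret low even when losses are chosen by an adaptive worst-case adversary. The function name, parameters, and toy loss sequence are illustrative, not part of any workshop material.

```python
import math

def hedge(losses, eta=0.5):
    """Multiplicative-weights (Hedge) learner over a fixed loss sequence.

    losses: list of rounds, each a list of per-expert losses in [0, 1].
    Returns (expected cumulative learner loss, best single expert's loss).
    """
    n = len(losses[0])
    weights = [1.0] * n
    learner_loss = 0.0
    for round_losses in losses:
        total = sum(weights)
        probs = [w / total for w in weights]
        # Expected loss of the randomized learner this round.
        learner_loss += sum(p * l for p, l in zip(probs, round_losses))
        # Exponentially down-weight experts that suffered high loss.
        weights = [w * math.exp(-eta * l)
                   for w, l in zip(weights, round_losses)]
    best = min(sum(r[i] for r in losses) for i in range(n))
    return learner_loss, best

# Toy adversarial sequence: two experts whose losses alternate each round.
T = 100
losses = [[t % 2, 1 - (t % 2)] for t in range(T)]
learner, best = hedge(losses, eta=0.1)
```

With a suitable learning rate, Hedge's regret against the best fixed expert grows as O(sqrt(T log n)); the strategic settings listed above are precisely those where neither this adversarial guarantee nor the fixed-distribution PAC assumption matches the agents' actual incentives.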

The main goal of this workshop is to address the challenges and opportunities that arise from the presence of strategic behavior in learning theory. The workshop aims to bring together members of different communities, including machine learning, economics, theoretical computer science, and social computing, to share recent results, discuss important directions for future research, and foster collaborations.

09:00 AM Yiling Chen: Learning in Strategic Data Environments (Invited Talk)
Yiling Chen
09:45 AM Strategic Classification from Revealed Preferences (Talk)
Jinshuo Dong
10:00 AM Learning in Repeated Auctions with Budgets: Regret Minimization and Equilibrium (Talk)
Yonatan Gur
10:15 AM Spotlights (Talks)
Chara Podimata, Song Zuo, Zhe Feng, Anthony Kim
11:00 AM Eva Tardos: Online learning with partial information for players in games (Invited Talk)
Eva Tardos
11:45 AM Mehryar Mohri: Regret minimization against strategic buyers (Invited Talk)
Mehryar Mohri
12:30 PM Lunch Break (Break)
01:50 PM Percy Liang: Learning with Adversaries and Collaborators (Invited Talk)
Percy Liang
02:35 PM Spotlights (Talks)
Antti Kangasrääsiö, Richard Everett, Yitao Liang, Yang Cai, Steven Wu, Vidya Muthukumar, Sven Schmit
03:00 PM Poster Session & Coffee break (Break)
03:30 PM Alex Peysakhovich: Towards cooperative AI (Invited Talk)
Alexander Peysakhovich
04:15 PM Statistical Tests of Incentive Compatibility in Display Ad Auctions (Talk)
Andres Munoz
04:30 PM Optimal Economic Design through Deep Learning (Talk)
David Parkes
04:45 PM Learning Against Non-Stationary Agents with Opponent Modeling & Deep Reinforcement Learning (Talk)
Richard Everett