In many multi-agent settings, participants can form teams to achieve collective outcomes that may far surpass their individual capabilities. Measuring the relative contributions of agents and allocating them shares of the reward that promote long-lasting cooperation are difficult tasks. Cooperative game theory offers solution concepts identifying distribution schemes such as the Shapley value, which fairly reflects the contribution of individuals to the performance of the team, or the Core, which reduces the incentive of agents to abandon their team. Applications of such methods include identifying influential features and sharing the costs of joint ventures or team formation. Unfortunately, using these solutions requires tackling a computational barrier, as they are hard to compute even in restricted settings. In this work, we show how cooperative game-theoretic solutions can be distilled into a learned model by training neural networks to propose fair and stable payoff allocations. We show that our approach creates models that can generalize to games far from the training distribution and can predict solutions for more players than observed during training. An important application of our framework is Explainable AI: our approach can be used to speed up Shapley value computations on many instances.
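To make the computational barrier concrete, below is a minimal sketch (not from the paper) of the exact Shapley value computation for a toy weighted voting game. It enumerates all n! player orderings and averages each player's marginal contribution, which is why exact computation becomes infeasible as the number of players grows; the specific weights and quota are illustrative assumptions.

```python
import math
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values by enumerating all n! orderings of the players.

    `value` maps a frozenset of players (a coalition) to its worth.
    """
    shapley = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            # Marginal contribution of p given the players ahead of it.
            shapley[p] += value(frozenset(coalition)) - before
    n_orderings = math.factorial(len(players))
    return {p: s / n_orderings for p, s in shapley.items()}

# Toy weighted voting game: a coalition wins (value 1) iff its
# total weight meets the quota. These numbers are made up.
weights = {"a": 3, "b": 2, "c": 1}
quota = 4

def v(coalition):
    return 1.0 if sum(weights[p] for p in coalition) >= quota else 0.0

print(shapley_values(list(weights), v))  # a = 2/3, b = 1/6, c = 1/6
```

Player "a" is pivotal in four of the six orderings, so it receives two thirds of the payoff even though it holds only half the total weight; this gap between weight and actual power is exactly what the Shapley value captures, and what the learned model in this work is trained to approximate.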
Daphne Cornelisse (Radboud University)
Daphne is a recent graduate of the AI program at Radboud University. She completed her Master's thesis in the Complex Learning lab, led by Prof. Tal Kachman. Her research focuses on (interactive) learning in multi-agent systems. Prior to this, Daphne completed a BSc in Molecular & Computational Neuroscience. In her free time, Daphne loves distilling research ideas into simple sketches.
Thomas Rood (Radboud University)
Yoram Bachrach (DeepMind)
Mateusz Malinowski (DeepMind)
Tal Kachman (Radboud University)