Recent techniques for approximating Nash equilibria in very large games leverage neural networks to learn approximately optimal policies (strategies). One promising line of research uses neural networks to approximate counterfactual regret minimization (CFR) or its modern variants. DREAM, the only current CFR-based neural method that is model-free and therefore scalable to very large games, trains a neural network on an estimated regret target that can have extremely high variance due to an importance sampling term inherited from Monte Carlo CFR (MCCFR). In this paper we propose an unbiased model-free method that does not require any importance sampling. Our method, ESCHER, is principled and is guaranteed to converge to an approximate Nash equilibrium with high probability. We show that the variance of ESCHER's estimated regret is orders of magnitude lower than that of DREAM and other baselines. We then show that ESCHER outperforms the prior state of the art, DREAM and neural fictitious self-play (NFSP), on a number of games, and that the difference becomes dramatic as game size increases. In the very large game of dark chess, ESCHER beats DREAM and NFSP in head-to-head competition over 90% of the time.
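The variance issue the abstract attributes to importance sampling can be illustrated with a toy estimator. This sketch is not the paper's algorithm; the payoffs and the two policies below are invented for illustration. It compares an importance-sampled estimate of an expected payoff (reweighting samples drawn from a skewed behavior policy, as MCCFR-style estimators do) against a direct estimate from the target policy itself. Both are unbiased, but the importance weights inflate the variance.

```python
import random
import statistics

# Toy illustration: estimate E_pi[v], the expected payoff under a
# target policy pi, in two ways. Importance sampling reweights each
# sample drawn from a different policy b by pi(a)/b(a); when b puts
# little mass on some action, that ratio is large and variance blows up.

random.seed(0)
values = {"x": 1.0, "y": -1.0}   # hypothetical payoff of each action
pi = {"x": 0.5, "y": 0.5}        # target policy to evaluate
b = {"x": 0.95, "y": 0.05}       # skewed sampling (behavior) policy

def is_estimate(n):
    """Importance-sampled estimate of E_pi[v] from n samples of b."""
    total = 0.0
    for _ in range(n):
        a = "x" if random.random() < b["x"] else "y"
        total += (pi[a] / b[a]) * values[a]  # weight can be as large as 10
    return total / n

def direct_estimate(n):
    """Estimate of E_pi[v] from n samples of pi itself (no reweighting)."""
    total = 0.0
    for _ in range(n):
        a = "x" if random.random() < pi["x"] else "y"
        total += values[a]
    return total / n

# Repeat each estimator many times and compare the spread of the results.
is_runs = [is_estimate(100) for _ in range(200)]
direct_runs = [direct_estimate(100) for _ in range(200)]
is_var = statistics.variance(is_runs)
direct_var = statistics.variance(direct_runs)
print(f"importance-sampled variance: {is_var:.4f}")
print(f"direct-sample variance:      {direct_var:.4f}")
```

Both estimators target the same quantity (here E_pi[v] = 0), but the importance-sampled one is several times noisier; the gap widens as the behavior policy grows more skewed, which is the regime deep games put such estimators in.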
Author Information
Stephen McAleer (CMU)
Gabriele Farina (Meta AI; Carnegie Mellon University)
Marc Lanctot (DeepMind)
Tuomas Sandholm (CMU, Strategic Machine, Strategy Robot, Optimized Markets)
More from the Same Authors
- 2021 Spotlight: Subgame solving without common knowledge (Brian Zhang · Tuomas Sandholm)
- 2021 Spotlight: Sample Complexity of Tree Search Configuration: Cutting Planes and Beyond (Maria-Florina Balcan · Siddharth Prasad · Tuomas Sandholm · Ellen Vitercik)
- 2021: Temporal-Difference Value Estimation via Uncertainty-Guided Soft Updates (Litian Liang · Yaosheng Xu · Stephen McAleer · Dailin Hu · Alexander Ihler · Pieter Abbeel · Roy Fox)
- 2021: Target Entropy Annealing for Discrete Soft Actor-Critic (Yaosheng Xu · Dailin Hu · Litian Liang · Stephen McAleer · Pieter Abbeel · Roy Fox)
- 2021: Deep learning reconstruction of the neutrino energy with a shallow Askaryan detector (Stephen McAleer · Christian Glaser · Pierre Baldi)
- 2022 Poster: Towards Human-Level Bimanual Dexterous Manipulation with Reinforcement Learning (Yuanpei Chen · Tianhao Wu · Shengjie Wang · Xidong Feng · Jiechuan Jiang · Zongqing Lu · Stephen McAleer · Hao Dong · Song-Chun Zhu · Yaodong Yang)
- 2022: Clairvoyant Regret Minimization: Equivalence with Nemirovski's Conceptual Prox Method and Extension to General Convex Games (Gabriele Farina · Christian Kroer · Chung-Wei Lee · Haipeng Luo)
- 2022: Feasible Adversarial Robust Reinforcement Learning for Underspecified Environments (JB Lanier · Stephen McAleer · Pierre Baldi · Roy Fox)
- 2022: A Unified Approach to Reinforcement Learning, Quantal Response Equilibria, and Two-Player Zero-Sum Games (Samuel Sokota · Ryan D'Orazio · J. Zico Kolter · Nicolas Loizou · Marc Lanctot · Ioannis Mitliagkas · Noam Brown · Christian Kroer)
- 2022: Mastering the Game of No-Press Diplomacy via Human-Regularized Reinforcement Learning and Planning (Anton Bakhtin · David Wu · Adam Lerer · Jonathan Gray · Athul Jacob · Gabriele Farina · Alexander Miller · Noam Brown)
- 2022 Spotlight: Towards Human-Level Bimanual Dexterous Manipulation with Reinforcement Learning (Yuanpei Chen · Tianhao Wu · Shengjie Wang · Xidong Feng · Jiechuan Jiang · Zongqing Lu · Stephen McAleer · Hao Dong · Song-Chun Zhu · Yaodong Yang)
- 2022: Poster Session 2 (Jinwuk Seok · Bo Liu · Ryotaro Mitsuboshi · David Martinez-Rubio · Weiqiang Zheng · Ilgee Hong · Chen Fan · Kazusato Oko · Bo Tang · Miao Cheng · Aaron Defazio · Tim G. J. Rudner · Gabriele Farina · Vishwak Srinivasan · Ruichen Jiang · Peng Wang · Jane Lee · Nathan Wycoff · Nikhil Ghosh · Yinbin Han · David Mueller · Liu Yang · Amrutha Varshini Ramesh · Siqi Zhang · Kaifeng Lyu · David Yunis · Kumar Kshitij Patel · Fangshuo Liao · Dmitrii Avdiukhin · Xiang Li · Sattar Vakili · Jiaxin Shi)
- 2022 Poster: Structural Analysis of Branch-and-Cut and the Learnability of Gomory Mixed Integer Cuts (Maria-Florina Balcan · Siddharth Prasad · Tuomas Sandholm · Ellen Vitercik)
- 2022 Poster: Uncoupled Learning Dynamics with $O(\log T)$ Swap Regret in Multiplayer Games (Ioannis Anagnostides · Gabriele Farina · Christian Kroer · Chung-Wei Lee · Haipeng Luo · Tuomas Sandholm)
- 2022 Poster: Maximizing Revenue under Market Shrinkage and Market Uncertainty (Maria-Florina Balcan · Siddharth Prasad · Tuomas Sandholm)
- 2022 Poster: Polynomial-Time Optimal Equilibria with a Mediator in Extensive-Form Games (Brian Zhang · Tuomas Sandholm)
- 2022 Poster: Optimistic Mirror Descent Either Converges to Nash or to Strong Coarse Correlated Equilibria in Bimatrix Games (Ioannis Anagnostides · Gabriele Farina · Ioannis Panageas · Tuomas Sandholm)
- 2022 Poster: Subgame Solving in Adversarial Team Games (Brian Zhang · Luca Carminati · Federico Cacciamani · Gabriele Farina · Pierriccardo Olivieri · Nicola Gatti · Tuomas Sandholm)
- 2022 Poster: Near-Optimal No-Regret Learning Dynamics for General Convex Games (Gabriele Farina · Ioannis Anagnostides · Haipeng Luo · Chung-Wei Lee · Christian Kroer · Tuomas Sandholm)
- 2022 Expo Demonstration: Human Modeling and Strategic Reasoning in the Game of Diplomacy (Noam Brown · Alexander Miller · Gabriele Farina)
- 2021 Poster: Subgame solving without common knowledge (Brian Zhang · Tuomas Sandholm)
- 2021 Poster: Equilibrium Refinement for the Age of Machines: The One-Sided Quasi-Perfect Equilibrium (Gabriele Farina · Tuomas Sandholm)
- 2021 Poster: Sample Complexity of Tree Search Configuration: Cutting Planes and Beyond (Maria-Florina Balcan · Siddharth Prasad · Tuomas Sandholm · Ellen Vitercik)
- 2020 Poster: Small Nash Equilibrium Certificates in Very Large Games (Brian Zhang · Tuomas Sandholm)
- 2020 Poster: Polynomial-Time Computation of Optimal Correlated Equilibria in Two-Player Extensive-Form Games with Public Chance Moves and Beyond (Gabriele Farina · Tuomas Sandholm)
- 2020 Poster: No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium (Andrea Celli · Alberto Marchesi · Gabriele Farina · Nicola Gatti)
- 2020 Oral: No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium (Andrea Celli · Alberto Marchesi · Gabriele Farina · Nicola Gatti)
- 2020 Poster: Improving Policy-Constrained Kidney Exchange via Pre-Screening (Duncan McElfresh · Michael Curry · Tuomas Sandholm · John Dickerson)
- 2020 Poster: Pipeline PSRO: A Scalable Approach for Finding Approximate Nash Equilibria in Large Games (Stephen McAleer · JB Lanier · Roy Fox · Pierre Baldi)
- 2019 Poster: Correlation in Extensive-Form Games: Saddle-Point Formulation and Benchmarks (Gabriele Farina · Chun Kai Ling · Fei Fang · Tuomas Sandholm)
- 2019 Poster: Efficient Regret Minimization Algorithm for Extensive-Form Correlated Equilibrium (Gabriele Farina · Chun Kai Ling · Fei Fang · Tuomas Sandholm)
- 2019 Spotlight: Efficient Regret Minimization Algorithm for Extensive-Form Correlated Equilibrium (Gabriele Farina · Chun Kai Ling · Fei Fang · Tuomas Sandholm)
- 2019 Poster: Optimistic Regret Minimization for Extensive-Form Games via Dilated Distance-Generating Functions (Gabriele Farina · Christian Kroer · Tuomas Sandholm)
- 2018: Regret Decomposition in Sequential Games with Convex Action Spaces and Losses (Gabriele Farina)
- 2018 Poster: A Unified Framework for Extensive-Form Game Abstraction with Bounds (Christian Kroer · Tuomas Sandholm)
- 2018 Poster: Depth-Limited Solving for Imperfect-Information Games (Noam Brown · Tuomas Sandholm · Brandon Amos)
- 2018 Poster: Solving Large Sequential Games with the Excessive Gap Technique (Christian Kroer · Gabriele Farina · Tuomas Sandholm)
- 2018 Poster: Practical exact algorithm for trembling-hand equilibrium refinements in games (Gabriele Farina · Nicola Gatti · Tuomas Sandholm)
- 2018 Spotlight: Solving Large Sequential Games with the Excessive Gap Technique (Christian Kroer · Gabriele Farina · Tuomas Sandholm)
- 2018 Poster: Ex ante coordination and collusion in zero-sum multi-player extensive-form games (Gabriele Farina · Andrea Celli · Nicola Gatti · Tuomas Sandholm)
- 2017 Poster: A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning (Marc Lanctot · Vinicius Zambaldi · Audrunas Gruslys · Angeliki Lazaridou · Karl Tuyls · Julien Perolat · David Silver · Thore Graepel)
- 2017 Demonstration: Libratus: Beating Top Humans in No-Limit Poker (Noam Brown · Tuomas Sandholm)
- 2017 Poster: Safe and Nested Subgame Solving for Imperfect-Information Games (Noam Brown · Tuomas Sandholm)
- 2017 Oral: Safe and Nested Subgame Solving for Imperfect-Information Games (Noam Brown · Tuomas Sandholm)
- 2016 Poster: Sample Complexity of Automated Mechanism Design (Maria-Florina Balcan · Tuomas Sandholm · Ellen Vitercik)
- 2015 Poster: Regret-Based Pruning in Extensive-Form Games (Noam Brown · Tuomas Sandholm)
- 2015 Demonstration: Claudico: The World's Strongest No-Limit Texas Hold'em Poker AI (Noam Brown · Tuomas Sandholm)
- 2014 Poster: Diverse Randomized Agents Vote to Win (Albert Jiang · Leandro Soriano Marcolino · Ariel Procaccia · Tuomas Sandholm · Nisarg Shah · Milind Tambe)