Recent techniques for approximating Nash equilibria in very large games leverage neural networks to learn approximately optimal policies (strategies). One promising line of research uses neural networks to approximate counterfactual regret minimization (CFR) or its modern variants. DREAM, the only current CFR-based neural method that is model-free and therefore scalable to very large games, trains a neural network on an estimated regret target that can have extremely high variance due to an importance sampling term inherited from Monte Carlo CFR (MCCFR). In this paper we propose an unbiased model-free method that does not require any importance sampling. Our method, ESCHER, is principled and is guaranteed to converge to an approximate Nash equilibrium with high probability. We show that the variance of ESCHER's estimated regret is orders of magnitude lower than that of DREAM and other baselines. We then show that ESCHER outperforms the prior state of the art, DREAM and neural fictitious self-play (NFSP), on a number of games, and that the difference becomes dramatic as game size increases. In the very large game of dark chess, ESCHER beats DREAM and NFSP in head-to-head competition over 90% of the time.
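To make the variance claim concrete, here is a minimal toy sketch (not the paper's algorithm; all numbers, policies, and the simulated value function are hypothetical) contrasting an importance-sampled value estimate of the kind used in outcome-sampling MCCFR and DREAM-style regret targets with an estimate that instead queries a learned history value function, in the spirit of ESCHER:

```python
# Toy sketch under hypothetical numbers, not ESCHER's implementation:
# compare the variance of an importance-sampled value estimate with an
# estimate that queries a (here: simulated) learned value function.
import random
import statistics

random.seed(0)

TRUE_VALUES = [1.0, 0.0, 0.5, 0.2]        # hypothetical action values
SAMPLE_POLICY = [0.01, 0.33, 0.33, 0.33]  # rarely samples action 0

def importance_sampled(action, n):
    """Sampled utility divided by the sampling probability: unbiased,
    but the 1/p correction term explodes when p is small."""
    estimates = []
    for _ in range(n):
        a = random.choices(range(4), weights=SAMPLE_POLICY)[0]
        estimates.append(
            TRUE_VALUES[a] / SAMPLE_POLICY[a] if a == action else 0.0)
    return estimates

def value_function(action, n, noise=0.05):
    """Query a simulated value function: unbiased here by construction,
    with variance set only by the (hypothetical) approximation noise."""
    return [TRUE_VALUES[action] + random.gauss(0.0, noise)
            for _ in range(n)]

for name, est in [("importance sampling", importance_sampled(0, 100_000)),
                  ("value function", value_function(0, 100_000))]:
    print(f"{name:>19}: mean={statistics.mean(est):.3f}, "
          f"variance={statistics.variance(est):.3f}")
```

Under these toy numbers both estimators have mean close to the true value of 1.0, but the importance-sampled estimator has variance v^2(1/p - 1) = 99 while the value-function estimator's variance is about 0.0025, mirroring the orders-of-magnitude gap the abstract describes.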
Author Information
Stephen McAleer (CMU)
Gabriele Farina (Meta AI; Carnegie Mellon University)
Marc Lanctot (DeepMind)
Tuomas Sandholm (CMU, Strategic Machine, Strategy Robot, Optimized Markets)
More from the Same Authors
- 2021 Spotlight: Subgame solving without common knowledge
  Brian Zhang · Tuomas Sandholm
- 2021 Spotlight: Sample Complexity of Tree Search Configuration: Cutting Planes and Beyond
  Maria-Florina Balcan · Siddharth Prasad · Tuomas Sandholm · Ellen Vitercik
- 2021: Temporal-Difference Value Estimation via Uncertainty-Guided Soft Updates
  Litian Liang · Yaosheng Xu · Stephen McAleer · Dailin Hu · Alexander Ihler · Pieter Abbeel · Roy Fox
- 2021: Target Entropy Annealing for Discrete Soft Actor-Critic
  Yaosheng Xu · Dailin Hu · Litian Liang · Stephen McAleer · Pieter Abbeel · Roy Fox
- 2021: Deep learning reconstruction of the neutrino energy with a shallow Askaryan detector
  Stephen McAleer · Christian Glaser · Pierre Baldi
- 2022 Poster: Towards Human-Level Bimanual Dexterous Manipulation with Reinforcement Learning
  Yuanpei Chen · Tianhao Wu · Shengjie Wang · Xidong Feng · Jiechuan Jiang · Zongqing Lu · Stephen McAleer · Hao Dong · Song-Chun Zhu · Yaodong Yang
- 2022: Clairvoyant Regret Minimization: Equivalence with Nemirovski’s Conceptual Prox Method and Extension to General Convex Games
  Gabriele Farina · Christian Kroer · Chung-Wei Lee · Haipeng Luo
- 2022: Feasible Adversarial Robust Reinforcement Learning for Underspecified Environments
  JB Lanier · Stephen McAleer · Pierre Baldi · Roy Fox
- 2022: A Unified Approach to Reinforcement Learning, Quantal Response Equilibria, and Two-Player Zero-Sum Games
  Samuel Sokota · Ryan D'Orazio · J. Zico Kolter · Nicolas Loizou · Marc Lanctot · Ioannis Mitliagkas · Noam Brown · Christian Kroer
- 2022: Mastering the Game of No-Press Diplomacy via Human-Regularized Reinforcement Learning and Planning
  Anton Bakhtin · David Wu · Adam Lerer · Jonathan Gray · Athul Jacob · Gabriele Farina · Alexander Miller · Noam Brown
- 2023: Efficient Learning in Polyhedral Games via Best Response Oracles
  Darshan Chakrabarti · Gabriele Farina · Christian Kroer
- 2023: Llemma: An Open Language Model For Mathematics
  Zhangir Azerbayev · Hailey Schoelkopf · Keiran Paster · Marco Dos Santos · Stephen McAleer · Albert Q. Jiang · Jia Deng · Stella Biderman · Sean Welleck
- 2023: The Consensus Game: Language Model Generation via Equilibrium Search
  Athul Jacob · Yikang Shen · Gabriele Farina · Jacob Andreas
- 2023: Confronting Reward Model Overoptimization with Constrained RLHF
  Ted Moskovitz · Aaditya Singh · DJ Strouse · Tuomas Sandholm · Russ Salakhutdinov · Anca Dragan · Stephen McAleer
- 2023 Workshop: Foundation Models for Decision Making
  Sherry Yang · Ofir Nachum · Yilun Du · Stephen McAleer · Igor Mordatch · Linxi Fan · Jeannette Bohg · Dale Schuurmans
- 2023 Poster: Computing Optimal Equilibria and Mechanisms via Learning in Zero-Sum Extensive-Form Games
  Brian Zhang · Gabriele Farina · Ioannis Anagnostides · Federico Cacciamani · Stephen McAleer · Andreas Haupt · Andrea Celli · Nicola Gatti · Vincent Conitzer · Tuomas Sandholm
- 2023 Poster: Regret Matching+: (In)Stability and Fast Convergence in Games
  Gabriele Farina · Julien Grand-Clément · Christian Kroer · Chung-Wei Lee · Haipeng Luo
- 2023 Poster: Team-PSRO for Learning Approximate TMECor in Large Team Games via Cooperative Reinforcement Learning
  Stephen McAleer · Gabriele Farina · Gaoyue Zhou · Mingzhi Wang · Yaodong Yang · Tuomas Sandholm
- 2023 Poster: Polynomial-Time Linear-Swap Regret Minimization in Imperfect-Information Sequential Games
  Gabriele Farina · Charilaos Pipis
- 2023 Poster: Language Models can Solve Computer Tasks
  Geunwoo Kim · Pierre Baldi · Stephen McAleer
- 2023 Poster: Bicriteria Multidimensional Mechanism Design with Side Information
  Siddharth Prasad · Maria-Florina Balcan · Tuomas Sandholm
- 2023 Poster: On the Convergence of No-Regret Learning Dynamics in Time-Varying Games
  Ioannis Anagnostides · Ioannis Panageas · Gabriele Farina · Tuomas Sandholm
- 2023 Poster: On the Interplay between Social Welfare and Tractability of Equilibria
  Ioannis Anagnostides · Tuomas Sandholm
- 2023 Poster: Policy Space Diversity for Non-Transitive Games
  Jian Yao · Weiming Liu · Haobo Fu · Yaodong Yang · Stephen McAleer · Qiang Fu · Wei Yang
- 2022 Spotlight: Towards Human-Level Bimanual Dexterous Manipulation with Reinforcement Learning
  Yuanpei Chen · Tianhao Wu · Shengjie Wang · Xidong Feng · Jiechuan Jiang · Zongqing Lu · Stephen McAleer · Hao Dong · Song-Chun Zhu · Yaodong Yang
- 2022: Poster Session 2
  Jinwuk Seok · Bo Liu · Ryotaro Mitsuboshi · David Martinez-Rubio · Weiqiang Zheng · Ilgee Hong · Chen Fan · Kazusato Oko · Bo Tang · Miao Cheng · Aaron Defazio · Tim G. J. Rudner · Gabriele Farina · Vishwak Srinivasan · Ruichen Jiang · Peng Wang · Jane Lee · Nathan Wycoff · Nikhil Ghosh · Yinbin Han · David Mueller · Liu Yang · Amrutha Varshini Ramesh · Siqi Zhang · Kaifeng Lyu · David Yunis · Kumar Kshitij Patel · Fangshuo Liao · Dmitrii Avdiukhin · Xiang Li · Sattar Vakili · Jiaxin Shi
- 2022 Poster: Structural Analysis of Branch-and-Cut and the Learnability of Gomory Mixed Integer Cuts
  Maria-Florina Balcan · Siddharth Prasad · Tuomas Sandholm · Ellen Vitercik
- 2022 Poster: Uncoupled Learning Dynamics with $O(\log T)$ Swap Regret in Multiplayer Games
  Ioannis Anagnostides · Gabriele Farina · Christian Kroer · Chung-Wei Lee · Haipeng Luo · Tuomas Sandholm
- 2022 Poster: Maximizing Revenue under Market Shrinkage and Market Uncertainty
  Maria-Florina Balcan · Siddharth Prasad · Tuomas Sandholm
- 2022 Poster: Polynomial-Time Optimal Equilibria with a Mediator in Extensive-Form Games
  Brian Zhang · Tuomas Sandholm
- 2022 Poster: Optimistic Mirror Descent Either Converges to Nash or to Strong Coarse Correlated Equilibria in Bimatrix Games
  Ioannis Anagnostides · Gabriele Farina · Ioannis Panageas · Tuomas Sandholm
- 2022 Poster: Subgame Solving in Adversarial Team Games
  Brian Zhang · Luca Carminati · Federico Cacciamani · Gabriele Farina · Pierriccardo Olivieri · Nicola Gatti · Tuomas Sandholm
- 2022 Poster: Near-Optimal No-Regret Learning Dynamics for General Convex Games
  Gabriele Farina · Ioannis Anagnostides · Haipeng Luo · Chung-Wei Lee · Christian Kroer · Tuomas Sandholm
- 2022 Expo Demonstration: Human Modeling and Strategic Reasoning in the Game of Diplomacy
  Noam Brown · Alexander Miller · Gabriele Farina
- 2021 Poster: Subgame solving without common knowledge
  Brian Zhang · Tuomas Sandholm
- 2021 Poster: Equilibrium Refinement for the Age of Machines: The One-Sided Quasi-Perfect Equilibrium
  Gabriele Farina · Tuomas Sandholm
- 2021 Poster: Sample Complexity of Tree Search Configuration: Cutting Planes and Beyond
  Maria-Florina Balcan · Siddharth Prasad · Tuomas Sandholm · Ellen Vitercik
- 2020 Poster: Small Nash Equilibrium Certificates in Very Large Games
  Brian Zhang · Tuomas Sandholm
- 2020 Poster: Polynomial-Time Computation of Optimal Correlated Equilibria in Two-Player Extensive-Form Games with Public Chance Moves and Beyond
  Gabriele Farina · Tuomas Sandholm
- 2020 Poster: No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium
  Andrea Celli · Alberto Marchesi · Gabriele Farina · Nicola Gatti
- 2020 Oral: No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium
  Andrea Celli · Alberto Marchesi · Gabriele Farina · Nicola Gatti
- 2020 Poster: Improving Policy-Constrained Kidney Exchange via Pre-Screening
  Duncan McElfresh · Michael Curry · Tuomas Sandholm · John Dickerson
- 2020 Poster: Pipeline PSRO: A Scalable Approach for Finding Approximate Nash Equilibria in Large Games
  Stephen McAleer · JB Lanier · Roy Fox · Pierre Baldi
- 2019 Poster: Correlation in Extensive-Form Games: Saddle-Point Formulation and Benchmarks
  Gabriele Farina · Chun Kai Ling · Fei Fang · Tuomas Sandholm
- 2019 Poster: Efficient Regret Minimization Algorithm for Extensive-Form Correlated Equilibrium
  Gabriele Farina · Chun Kai Ling · Fei Fang · Tuomas Sandholm
- 2019 Spotlight: Efficient Regret Minimization Algorithm for Extensive-Form Correlated Equilibrium
  Gabriele Farina · Chun Kai Ling · Fei Fang · Tuomas Sandholm
- 2019 Poster: Optimistic Regret Minimization for Extensive-Form Games via Dilated Distance-Generating Functions
  Gabriele Farina · Christian Kroer · Tuomas Sandholm
- 2018: Regret Decomposition in Sequential Games with Convex Action Spaces and Losses
  Gabriele Farina
- 2018 Poster: A Unified Framework for Extensive-Form Game Abstraction with Bounds
  Christian Kroer · Tuomas Sandholm
- 2018 Poster: Depth-Limited Solving for Imperfect-Information Games
  Noam Brown · Tuomas Sandholm · Brandon Amos
- 2018 Poster: Solving Large Sequential Games with the Excessive Gap Technique
  Christian Kroer · Gabriele Farina · Tuomas Sandholm
- 2018 Poster: Practical exact algorithm for trembling-hand equilibrium refinements in games
  Gabriele Farina · Nicola Gatti · Tuomas Sandholm
- 2018 Spotlight: Solving Large Sequential Games with the Excessive Gap Technique
  Christian Kroer · Gabriele Farina · Tuomas Sandholm
- 2018 Poster: Ex ante coordination and collusion in zero-sum multi-player extensive-form games
  Gabriele Farina · Andrea Celli · Nicola Gatti · Tuomas Sandholm
- 2017 Poster: A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning
  Marc Lanctot · Vinicius Zambaldi · Audrunas Gruslys · Angeliki Lazaridou · Karl Tuyls · Julien Perolat · David Silver · Thore Graepel
- 2017 Demonstration: Libratus: Beating Top Humans in No-Limit Poker
  Noam Brown · Tuomas Sandholm
- 2017 Poster: Safe and Nested Subgame Solving for Imperfect-Information Games
  Noam Brown · Tuomas Sandholm
- 2017 Oral: Safe and Nested Subgame Solving for Imperfect-Information Games
  Noam Brown · Tuomas Sandholm
- 2016 Poster: Sample Complexity of Automated Mechanism Design
  Maria-Florina Balcan · Tuomas Sandholm · Ellen Vitercik
- 2015 Poster: Regret-Based Pruning in Extensive-Form Games
  Noam Brown · Tuomas Sandholm
- 2015 Demonstration: Claudico: The World's Strongest No-Limit Texas Hold'em Poker AI
  Noam Brown · Tuomas Sandholm
- 2014 Poster: Diverse Randomized Agents Vote to Win
  Albert Jiang · Leandro Soriano Marcolino · Ariel Procaccia · Tuomas Sandholm · Nisarg Shah · Milind Tambe