
XDO: A Double Oracle Algorithm for Extensive-Form Games
Stephen McAleer · JB Lanier · Kevin A Wang · Pierre Baldi · Roy Fox

Tue Dec 07 04:30 PM -- 06:00 PM (PST)

Policy Space Response Oracles (PSRO) is a reinforcement learning (RL) algorithm for two-player zero-sum games that has been empirically shown to find approximate Nash equilibria in large games. Although PSRO is guaranteed to converge to an approximate Nash equilibrium and can handle continuous actions, it may take an exponential number of iterations as the number of information states (infostates) grows. We propose Extensive-Form Double Oracle (XDO), an extensive-form double oracle algorithm for two-player zero-sum games that is guaranteed to converge to an approximate Nash equilibrium linearly in the number of infostates. Unlike PSRO, which mixes best responses at the root of the game, XDO mixes best responses at every infostate. We also introduce Neural XDO (NXDO), where the best response is learned through deep RL. In tabular experiments on Leduc poker, we find that XDO achieves an approximate Nash equilibrium in a number of iterations an order of magnitude smaller than PSRO. Experiments on a modified Leduc poker game and Oshi-Zumo show that tabular XDO achieves a lower exploitability than CFR with the same amount of computation. We also find that NXDO outperforms PSRO and NFSP on a sequential multidimensional continuous-action game. NXDO is the first deep RL method that can find an approximate Nash equilibrium in high-dimensional continuous-action sequential games.
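The abstract contrasts PSRO, which mixes best responses at the root, with XDO, which mixes them at every infostate. The shared skeleton of both is the classic normal-form double-oracle loop: solve a restricted game over the current strategy populations, add each player's best response to the full game, and repeat. Below is a minimal sketch of that loop on rock-paper-scissors; the function names and the regret-matching meta-solver are our own illustrative choices, not the paper's implementation, and the extensive-form machinery that distinguishes XDO is omitted.

```python
# Normal-form double-oracle sketch on rock-paper-scissors (zero-sum).
# Row player's payoff matrix A[i][j]; column player receives -A[i][j].
A = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]

def _match(regrets):
    # Regret matching: play positive regrets proportionally, else uniform.
    pos = [max(r, 0.0) for r in regrets]
    s = sum(pos)
    return [x / s for x in pos] if s > 0 else [1.0 / len(pos)] * len(pos)

def restricted_nash(rows, cols, iters=20000):
    """Approximate Nash of the restricted game via self-play regret matching."""
    rr, cr = [0.0] * len(rows), [0.0] * len(cols)   # cumulative regrets
    rs, cs = [0.0] * len(rows), [0.0] * len(cols)   # average-strategy sums
    for _ in range(iters):
        p, q = _match(rr), _match(cr)
        for k, x in enumerate(p): rs[k] += x
        for l, x in enumerate(q): cs[l] += x
        u_row = [sum(q[l] * A[i][j] for l, j in enumerate(cols)) for i in rows]
        u_col = [sum(p[k] * -A[i][j] for k, i in enumerate(rows)) for j in cols]
        v_row = sum(p[k] * u_row[k] for k in range(len(rows)))
        v_col = sum(q[l] * u_col[l] for l in range(len(cols)))
        for k in range(len(rows)): rr[k] += u_row[k] - v_row
        for l in range(len(cols)): cr[l] += u_col[l] - v_col
    return [x / iters for x in rs], [x / iters for x in cs]

def best_response_row(cols, q):
    # Pure best response over the FULL action set against the column mixture.
    return max(range(len(A)),
               key=lambda i: sum(q[l] * A[i][j] for l, j in enumerate(cols)))

def best_response_col(rows, p):
    return max(range(len(A[0])),
               key=lambda j: sum(p[k] * -A[i][j] for k, i in enumerate(rows)))

# Double-oracle loop: solve the restricted game, expand with best responses.
rows, cols = [0], [0]          # start each population with "rock" only
while True:
    p, q = restricted_nash(rows, cols)
    br_r, br_c = best_response_row(cols, q), best_response_col(rows, p)
    if br_r in rows and br_c in cols:
        break                  # no new best response: approximate Nash found
    if br_r not in rows: rows.append(br_r)
    if br_c not in cols: cols.append(br_c)

print(sorted(rows), [round(x, 2) for x in p])  # populations grow to cover R/P/S
```

In this toy game the populations expand until they cover all three actions, at which point the restricted-game solution is (approximately) the uniform Nash equilibrium. PSRO replaces the exact best response with one trained by RL; XDO additionally lets the restricted game mix population strategies at every infostate, which is what yields convergence linear in the number of infostates.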

Author Information

Stephen McAleer (UC Irvine)
JB Lanier (UC Irvine)
Kevin A Wang (UC Irvine)


Pierre Baldi (UC Irvine)
Roy Fox (UC Irvine)

[Roy Fox](http://roydfox.com/) is a postdoc at UC Berkeley working with [Ion Stoica](http://people.eecs.berkeley.edu/~istoica/) in the Real-Time Intelligent Secure Explainable lab ([RISELab](https://rise.cs.berkeley.edu/)), and with [Ken Goldberg](http://goldberg.berkeley.edu/) in the Laboratory for Automation Science and Engineering ([AUTOLAB](http://autolab.berkeley.edu/)). His research interests include reinforcement learning, dynamical systems, information theory, automation, and the connections between these fields. His current research focuses on automatic discovery of hierarchical control structures in deep reinforcement learning and in imitation learning of robotic tasks. Roy holds an MSc in Computer Science from the [Technion](http://www.cs.technion.ac.il/), under the supervision of [Moshe Tennenholtz](http://iew3.technion.ac.il/Home/Users/Moshet.phtml), and a PhD in Computer Science from the [Hebrew University](http://www.cs.huji.ac.il/), under the supervision of [Naftali Tishby](http://www.cs.huji.ac.il/~tishby/). He was an exchange PhD student with Larry Abbott and [Liam Paninski](http://www.stat.columbia.edu/~liam/) at the [Center for Theoretical Neuroscience](http://www.neurotheory.columbia.edu/) at Columbia University, and a research intern at Microsoft Research.
