We propose MDP-GapE, a new trajectory-based Monte-Carlo Tree Search algorithm for planning in a Markov Decision Process whose transitions have finite support. We prove an upper bound on the number of sampled trajectories needed for MDP-GapE to identify a near-optimal action with high probability. This problem-dependent result is expressed in terms of the sub-optimality gaps of the state-action pairs visited during exploration. Our experiments reveal that MDP-GapE is also effective in practice, in contrast with other fixed-confidence algorithms with sample complexity guarantees, which remain mostly theoretical.
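The abstract is high-level, so here is a loose illustration of the kind of gap-based, fixed-confidence identification it refers to — a minimal sketch in the one-step (bandit) special case, not the authors' MDP-GapE, which plans over multi-step trajectories. The function name, the Hoeffding-style confidence radius, and the leader/challenger sampling rule are all assumptions made for the sketch, not details taken from the paper.

```python
import math
import random

def gap_based_identification(sample, n_actions, delta=0.05, max_samples=100_000):
    """Sample actions until confidence intervals separate the empirically
    best action from the rest (fixed-confidence stopping), then recommend it."""
    counts = [0] * n_actions
    sums = [0.0] * n_actions
    for a in range(n_actions):          # one initial sample per action
        sums[a] += sample(a)
        counts[a] += 1
    while sum(counts) < max_samples:
        t = sum(counts)
        means = [sums[a] / counts[a] for a in range(n_actions)]
        # Hoeffding-style radius; rewards are assumed to lie in [0, 1]
        rad = [math.sqrt(math.log(4 * n_actions * t * t / delta) / (2 * counts[a]))
               for a in range(n_actions)]
        leader = max(range(n_actions), key=lambda a: means[a])
        challenger = max((a for a in range(n_actions) if a != leader),
                         key=lambda a: means[a] + rad[a])
        # stop when the leader's lower bound clears every other upper bound
        if means[leader] - rad[leader] >= means[challenger] + rad[challenger]:
            break
        for a in (leader, challenger):  # sample only the two contenders
            sums[a] += sample(a)
            counts[a] += 1
    means = [sums[a] / counts[a] for a in range(n_actions)]
    return max(range(n_actions), key=lambda a: means[a]), sum(counts)

# usage: three Bernoulli arms with success probabilities 0.2, 0.5, 0.9
random.seed(0)
probs = [0.2, 0.5, 0.9]
best, used = gap_based_identification(lambda a: float(random.random() < probs[a]), 3)
```

The stopping rule makes the sample count adapt to the gaps: well-separated actions are resolved quickly, while close contenders draw more samples — the same flavour of problem-dependent guarantee the paper proves for the MDP setting.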
Author Information
Anders Jonsson (Universitat Pompeu Fabra)
Emilie Kaufmann (CNRS)
Pierre Menard (Inria)
Omar Darwiche Domingues (Inria)
Edouard Leurent (Inria)
PhD student in Reinforcement Learning, at: the Inria SequeL project (sequential learning), the Inria Non-A project (finite-time control), and the Renault Group.
Michal Valko (DeepMind)
Michal is a machine learning scientist at DeepMind Paris and the SequeL team at Inria, and the lecturer of the master course Graphs in Machine Learning at l'ENS Paris-Saclay. Michal is primarily interested in designing algorithms that would require as little human supervision as possible. This means 1) reducing the "intelligence" that humans need to input into the system and 2) minimizing the data that humans need to spend inspecting, classifying, or "tuning" the algorithms. Another important feature of machine learning algorithms should be the ability to adapt to changing environments. That is why he is working in domains that are able to deal with minimal feedback, such as online learning, bandit algorithms, semi-supervised learning, and anomaly detection. Most recently he has worked on sequential algorithms with structured decisions where exploiting the structure leads to provably faster learning. Structured learning requires more time and space resources, and therefore Michal's most recent work includes efficient approximations such as graph and matrix sketching with learning guarantees. In the past, the common thread of Michal's work has been adaptive graph-based learning and its application to real-world problems such as recommender systems, medical error detection, and face recognition. His industrial collaborators include Adobe, Intel, Technicolor, and Microsoft Research. He received his Ph.D. in 2011 from the University of Pittsburgh under the supervision of Miloš Hauskrecht and was afterwards a postdoc of Rémi Munos before taking a permanent position at Inria in 2012.
More from the Same Authors
- 2020 Poster: Robust-Adaptive Control of Linear Systems: beyond Quadratic Costs »
  Edouard Leurent · Odalric-Ambrym Maillard · Denis Efimov
- 2020 Poster: Bootstrap Your Own Latent - A New Approach to Self-Supervised Learning »
  Jean-Bastien Grill · Florian Strub · Florent Altché · Corentin Tallec · Pierre Richemond · Elena Buchatskaya · Carl Doersch · Bernardo Avila Pires · Zhaohan Guo · Mohammad Gheshlaghi Azar · Bilal Piot · koray kavukcuoglu · Remi Munos · Michal Valko
- 2020 Poster: Sampling from a k-DPP without looking at all items »
  Daniele Calandriello · Michal Derezinski · Michal Valko
- 2020 Spotlight: Sampling from a k-DPP without looking at all items »
  Daniele Calandriello · Michal Derezinski · Michal Valko
- 2020 Oral: Bootstrap Your Own Latent - A New Approach to Self-Supervised Learning »
  Jean-Bastien Grill · Florian Strub · Florent Altché · Corentin Tallec · Pierre Richemond · Elena Buchatskaya · Carl Doersch · Bernardo Avila Pires · Zhaohan Guo · Mohammad Gheshlaghi Azar · Bilal Piot · koray kavukcuoglu · Remi Munos · Michal Valko
- 2020 Oral: Robust-Adaptive Control of Linear Systems: beyond Quadratic Costs »
  Edouard Leurent · Odalric-Ambrym Maillard · Denis Efimov
- 2020 Poster: Statistical Efficiency of Thompson Sampling for Combinatorial Semi-Bandits »
  Pierre Perrault · Etienne Boursier · Michal Valko · Vianney Perchet
- 2020 Poster: Sub-sampling for Efficient Non-Parametric Bandit Exploration »
  Dorian Baudry · Emilie Kaufmann · Odalric-Ambrym Maillard
- 2020 Spotlight: Sub-sampling for Efficient Non-Parametric Bandit Exploration »
  Dorian Baudry · Emilie Kaufmann · Odalric-Ambrym Maillard
- 2019: Coffee + Posters »
  Changhao Chen · Nils Gählert · Edouard Leurent · Johannes Lehner · Apratim Bhattacharyya · Harkirat Singh Behl · TeckYian Lim · Shiho Kim · Jelena Novosel · Błażej Osiński · Arindam Das · Ruobing Shen · Jeffrey Hawke · Joachim Sicking · Babak Shahian Jahromi · Theja Tulabandhula · Claudio Michaelis · Evgenia Rusak · WENHANG BAO · Hazem Rashed · JP Chen · Amin Ansari · Jaekwang Cha · Mohamed Zahran · Daniele Reda · Jinhyuk Kim · Kim Dohyun · Ho Suk · Junekyo Jhung · Alexander Kister · Matthias Fahrland · Adam Jakubowski · Piotr Miłoś · Jean Mercat · Bruno Arsenali · Silviu Homoceanu · Xiao-Yang Liu · Philip Torr · Ahmad El Sallab · Ibrahim Sobh · Anurag Arnab · Christopher Galias
- 2019 Poster: Planning in entropy-regularized Markov decision processes and games »
  Jean-Bastien Grill · Omar Darwiche Domingues · Pierre Menard · Remi Munos · Michal Valko
- 2019 Poster: Budgeted Reinforcement Learning in Continuous State Space »
  Nicolas Carrara · Edouard Leurent · Romain Laroche · Tanguy Urvoy · Odalric-Ambrym Maillard · Olivier Pietquin
- 2018: Poster Session »
  Zihan Ding · David Mguni · Yuzheng Zhuang · Edouard Leurent · Takuma Oda · Yulia Tachibana · Paweł Gora · Neema Davis · Nemanja Djuric · Fang-Chieh Chou · elmira amirloo