The options framework integrates temporal abstraction into the reinforcement learning model through the introduction of macro-actions (i.e., options). Recent work leveraged the mapping of Markov decision processes (MDPs) with options to semi-MDPs (SMDPs) and introduced SMDP versions of exploration-exploitation algorithms (e.g., RMAX-SMDP and UCRL-SMDP) to analyze the impact of options on learning performance. Nonetheless, the PAC-SMDP sample complexity of RMAX-SMDP can hardly be translated into equivalent PAC-MDP theoretical guarantees, while UCRL-SMDP requires prior knowledge of the parameters characterizing the distributions of the cumulative reward and duration of each option, which are rarely available in practice. In this paper, we remove this limitation by combining the SMDP view with the inner Markov structure of options into a novel algorithm whose regret matches UCRL-SMDP's up to an additive term. We show scenarios where this term is negligible and the advantage of temporal abstraction is preserved. We also report preliminary empirical results supporting the theoretical findings.
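As a minimal illustration of why options induce an SMDP (not the paper's algorithm), the sketch below executes a hypothetical option in a toy chain MDP. An option runs an internal policy until a termination condition fires, so each execution yields a random cumulative reward and a random duration: the two quantities whose distributions characterize the induced SMDP, and whose parameters UCRL-SMDP assumes are known in advance. All names and the toy dynamics here are assumptions for illustration only.

```python
import random

random.seed(0)

N_STATES = 5  # toy chain MDP: states 0..4, with a rewarding goal at state 4

def step(state, action):
    """Toy primitive dynamics: action +1 moves right w.p. 0.9, else stays."""
    if action == +1 and random.random() < 0.9:
        state = min(state + 1, N_STATES - 1)
    reward = 1.0 if state == N_STATES - 1 else 0.0
    return state, reward

def execute_option(state):
    """Run the option 'move right until the goal is reached or 10 steps elapse'.

    Returns (next_state, cumulative_reward, duration). From the SMDP
    viewpoint, only the distributions of these quantities matter; the
    inner Markov structure (the step-by-step transitions) is abstracted away.
    """
    cum_reward, duration = 0.0, 0
    while state != N_STATES - 1 and duration < 10:
        state, r = step(state, +1)  # internal policy: always move right
        cum_reward += r
        duration += 1
    return state, cum_reward, duration

s, R, d = execute_option(0)
print(s, R, d)
```

Both the cumulative reward R and the duration d are random variables over repeated executions, which is precisely why SMDP-based algorithms need (or must estimate) their distributional parameters.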
Author Information
Ronan Fruit (Inria Lille)
Matteo Pirotta (Facebook AI Research)
Alessandro Lazaric (Facebook AI Research)
Emma Brunskill (CMU)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Spotlight: Regret Minimization in MDPs with Options without Prior Knowledge
  Thu. Dec 7th, 01:35 -- 01:40 AM, Hall A
More from the Same Authors
- 2019 Poster: Exploration Bonus for Regret Minimization in Discrete and Continuous Average Reward MDPs
  Jian Qian · Ronan Fruit · Matteo Pirotta · Alessandro Lazaric
- 2019 Poster: Regret Bounds for Learning State Representations in Reinforcement Learning
  Ronald Ortner · Matteo Pirotta · Alessandro Lazaric · Ronan Fruit · Odalric-Ambrym Maillard
- 2018 Poster: Near Optimal Exploration-Exploitation in Non-Communicating Markov Decision Processes
  Ronan Fruit · Matteo Pirotta · Alessandro Lazaric
- 2018 Spotlight: Near Optimal Exploration-Exploitation in Non-Communicating Markov Decision Processes
  Ronan Fruit · Matteo Pirotta · Alessandro Lazaric
- 2017: Poster Session
  David Abel · Nicholas Denis · Maria Eckstein · Ronan Fruit · Karan Goel · Joshua Gruenstein · Anna Harutyunyan · Martin Klissarov · Xiangyu Kong · Aviral Kumar · Saurabh Kumar · Miao Liu · Daniel McNamee · Shayegan Omidshafiei · Silviu Pitis · Paulo Rauber · Melrose Roderick · Tianmin Shu · Yizhou Wang · Shangtong Zhang
- 2017: Spotlights & Poster Session
  David Abel · Nicholas Denis · Maria Eckstein · Ronan Fruit · Karan Goel · Joshua Gruenstein · Anna Harutyunyan · Martin Klissarov · Xiangyu Kong · Aviral Kumar · Saurabh Kumar · Miao Liu · Daniel McNamee · Shayegan Omidshafiei · Silviu Pitis · Paulo Rauber · Melrose Roderick · Tianmin Shu · Yizhou Wang · Shangtong Zhang
- 2017 Poster: Compatible Reward Inverse Reinforcement Learning
  Alberto Maria Metelli · Matteo Pirotta · Marcello Restelli
- 2017 Poster: Adaptive Batch Size for Safe Policy Gradients
  Matteo Papini · Matteo Pirotta · Marcello Restelli
- 2017 Poster: Efficient Second-Order Online Kernel Learning with Adaptive Embedding
  Daniele Calandriello · Alessandro Lazaric · Michal Valko
- 2016: Learning to improve learning: ML in the classroom
  Emma Brunskill
- 2016 Poster: (Withdrawn) Only H is left: Near-tight Episodic PAC RL
  Christoph Dann · Emma Brunskill
- 2015 Poster: Sample Complexity of Episodic Fixed-Horizon Reinforcement Learning
  Christoph Dann · Emma Brunskill
- 2014 Workshop: Novel Trends and Applications in Reinforcement Learning
  Csaba Szepesvari · Marc Deisenroth · Sergey Levine · Pedro Ortega · Brian Ziebart · Emma Brunskill · Naftali Tishby · Gerhard Neumann · Daniel Lee · Sridhar Mahadevan · Pieter Abbeel · David Silver · Vicenç Gómez
- 2014 Poster: Exploiting easy data in online optimization
  Amir Sani · Gergely Neu · Alessandro Lazaric
- 2014 Poster: Best-Arm Identification in Linear Bandits
  Marta Soare · Alessandro Lazaric · Remi Munos
- 2014 Spotlight: Exploiting easy data in online optimization
  Amir Sani · Gergely Neu · Alessandro Lazaric
- 2014 Poster: Sparse Multi-Task Reinforcement Learning
  Daniele Calandriello · Alessandro Lazaric · Marcello Restelli
- 2013 Poster: Adaptive Step-Size for Policy Gradient Methods
  Matteo Pirotta · Marcello Restelli · Luca Bascetta
- 2013 Poster: Sequential Transfer in Multi-armed Bandit with Finite Set of Models
  Mohammad Gheshlaghi Azar · Alessandro Lazaric · Emma Brunskill
- 2012 Poster: Best Arm Identification: A Unified Approach to Fixed Budget and Fixed Confidence
  Victor Gabillon · Mohammad Ghavamzadeh · Alessandro Lazaric
- 2012 Poster: Risk-Aversion in Multi-armed Bandits
  Amir Sani · Alessandro Lazaric · Remi Munos
- 2011 Poster: Multi-Bandit Best Arm Identification
  Victor Gabillon · Mohammad Ghavamzadeh · Alessandro Lazaric · Sebastien Bubeck
- 2011 Poster: Transfer from Multiple MDPs
  Alessandro Lazaric · Marcello Restelli
- 2010 Spotlight: LSTD with Random Projections
  Mohammad Ghavamzadeh · Alessandro Lazaric · Odalric-Ambrym Maillard · Remi Munos
- 2010 Poster: LSTD with Random Projections
  Mohammad Ghavamzadeh · Alessandro Lazaric · Odalric-Ambrym Maillard · Remi Munos
- 2007 Spotlight: Reinforcement Learning in Continuous Action Spaces through Sequential Monte Carlo Methods
  Alessandro Lazaric · Marcello Restelli · Andrea Bonarini
- 2007 Poster: Reinforcement Learning in Continuous Action Spaces through Sequential Monte Carlo Methods
  Alessandro Lazaric · Marcello Restelli · Andrea Bonarini