We consider online learning problems under a partial observability model capturing situations where the information conveyed to the learner lies between full information and bandit feedback. In the simplest variant, we assume that in addition to its own loss, the learner also gets to observe the losses of some other actions. The revealed losses depend on the learner's action and on a directed observation system chosen by the environment. For this setting, we propose the first algorithm that enjoys near-optimal regret guarantees without having to know the observation system before selecting its actions. Along similar lines, we also define a new partial-information setting that models online combinatorial optimization problems where the feedback received by the learner lies between semi-bandit and full feedback. As the predictions of our first algorithm cannot always be computed efficiently in this setting, we propose another algorithm with similar properties that is always computationally efficient, at the price of a slightly more complicated tuning mechanism. Both algorithms rely on a novel exploration strategy called implicit exploration, which is shown to be more efficient both computationally and information-theoretically than previously studied exploration strategies for the problem.
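The abstract only names the implicit exploration (IX) idea without spelling it out. As a rough illustration, and not the paper's algorithm for side observations, the following sketch shows an Exp3-style learner with an IX loss estimator in the plain multi-armed bandit case, where the probability of observing the played arm's loss is simply the probability of playing it; the function name, parameters, and tuning constants below are illustrative assumptions.

```python
import numpy as np

def exp3_ix(loss_matrix, eta, gamma, rng=None):
    """Minimal Exp3-IX-style sketch for the plain bandit case.

    Implicit exploration: the importance-weighted loss estimate divides by
    p[i] + gamma instead of p[i], which biases the estimate slightly downward
    but keeps its variance under control without mixing in explicit uniform
    exploration. Returns the total realized loss of the learner.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    T, K = loss_matrix.shape
    cum_est = np.zeros(K)  # cumulative IX loss estimates per arm
    total_loss = 0.0
    for t in range(T):
        # exponential weights; subtract the min estimate for numerical stability
        w = np.exp(-eta * (cum_est - cum_est.min()))
        p = w / w.sum()
        i = rng.choice(K, p=p)
        total_loss += loss_matrix[t, i]
        # IX estimator: only the played arm's loss is observed here
        cum_est[i] += loss_matrix[t, i] / (p[i] + gamma)
    return total_loss
```

Run on a problem where one arm is clearly best, the learner's total loss quickly concentrates on that arm; in the paper's side-observation setting, the denominator would instead involve the probability of *observing* arm i under the observation system.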
Author Information
Tomáš Kocák (Inria Lille - Nord Europe)
Gergely Neu (Universitat Pompeu Fabra)
Michal Valko (DeepMind Paris and Inria Lille - Nord Europe)
Michal is a research scientist at DeepMind Paris and in the SequeL team at Inria Lille - Nord Europe, France, led by Philippe Preux and Rémi Munos. He also teaches the course Graphs in Machine Learning at l'ENS Cachan. Michal is primarily interested in designing algorithms that require as little human supervision as possible. This means 1) reducing the "intelligence" that humans need to put into the system and 2) minimising the data that humans need to spend inspecting, classifying, or "tuning" the algorithms. Another important feature of machine learning algorithms should be the ability to adapt to changing environments. That is why he works in domains that can deal with minimal feedback, such as semi-supervised learning, bandit algorithms, and anomaly detection. The common thread of Michal's work has been adaptive graph-based learning and its application to real-world problems such as recommender systems, medical error detection, and face recognition. His industrial collaborators include Intel, Technicolor, and Microsoft Research. He received his PhD in 2011 from the University of Pittsburgh under the supervision of Miloš Hauskrecht, after which he was a postdoc of Rémi Munos.
Remi Munos (Google DeepMind)
More from the Same Authors
- 2020 Poster: Improved Sample Complexity for Incremental Autonomous Exploration in MDPs »
  Jean Tarbouriech · Matteo Pirotta · Michal Valko · Alessandro Lazaric
- 2020 Poster: A Unifying View of Optimism in Episodic Reinforcement Learning »
  Gergely Neu · Ciara Pike-Burke
- 2020 Oral: Improved Sample Complexity for Incremental Autonomous Exploration in MDPs »
  Jean Tarbouriech · Matteo Pirotta · Michal Valko · Alessandro Lazaric
- 2019 Poster: Exact sampling of determinantal point processes with sublinear time preprocessing »
  Michal Derezinski · Daniele Calandriello · Michal Valko
- 2019 Poster: Adaptive Temporal-Difference Learning for Policy Evaluation with Per-State Uncertainty Estimates »
  Carlos Riquelme · Hugo Penedones · Damien Vincent · Hartmut Maennel · Sylvain Gelly · Timothy A Mann · Andre Barreto · Gergely Neu
- 2019 Poster: Planning in entropy-regularized Markov decision processes and games »
  Jean-Bastien Grill · Omar Darwiche Domingues · Pierre Menard · Remi Munos · Michal Valko
- 2019 Poster: Beating SGD Saturation with Tail-Averaging and Minibatching »
  Nicole Muecke · Gergely Neu · Lorenzo Rosasco
- 2019 Poster: On two ways to use determinantal point processes for Monte Carlo integration »
  Guillaume Gautier · Rémi Bardenet · Michal Valko
- 2019 Poster: Multiagent Evaluation under Incomplete Information »
  Mark Rowland · Shayegan Omidshafiei · Karl Tuyls · Julien Perolat · Michal Valko · Georgios Piliouras · Remi Munos
- 2019 Spotlight: Multiagent Evaluation under Incomplete Information »
  Mark Rowland · Shayegan Omidshafiei · Karl Tuyls · Julien Perolat · Michal Valko · Georgios Piliouras · Remi Munos
- 2018 Poster: Optimistic optimization of a Brownian »
  Jean-Bastien Grill · Michal Valko · Remi Munos
- 2017 Poster: Online Influence Maximization under Independent Cascade Model with Semi-Bandit Feedback »
  Zheng Wen · Branislav Kveton · Michal Valko · Sharan Vaswani
- 2017 Poster: Efficient Second-Order Online Kernel Learning with Adaptive Embedding »
  Daniele Calandriello · Alessandro Lazaric · Michal Valko
- 2017 Poster: Boltzmann Exploration Done Right »
  Nicolò Cesa-Bianchi · Claudio Gentile · Gergely Neu · Gabor Lugosi
- 2016 Poster: Blazing the trails before beating the path: Sample-efficient Monte-Carlo planning »
  Jean-Bastien Grill · Michal Valko · Remi Munos
- 2016 Oral: Blazing the trails before beating the path: Sample-efficient Monte-Carlo planning »
  Jean-Bastien Grill · Michal Valko · Remi Munos
- 2015 Poster: Black-box optimization of noisy functions with unknown smoothness »
  Jean-Bastien Grill · Michal Valko · Remi Munos
- 2015 Poster: Explore no more: Improved high-probability regret bounds for non-stochastic bandits »
  Gergely Neu
- 2014 Poster: Exploiting easy data in online optimization »
  Amir Sani · Gergely Neu · Alessandro Lazaric
- 2014 Poster: Active Regression by Stratification »
  Sivan Sabato · Remi Munos
- 2014 Poster: Best-Arm Identification in Linear Bandits »
  Marta Soare · Alessandro Lazaric · Remi Munos
- 2014 Poster: Bounded Regret for Finite-Armed Structured Bandits »
  Tor Lattimore · Remi Munos
- 2014 Spotlight: Exploiting easy data in online optimization »
  Amir Sani · Gergely Neu · Alessandro Lazaric
- 2014 Poster: Extreme bandits »
  Alexandra Carpentier · Michal Valko
- 2014 Poster: Online combinatorial optimization with stochastic decision sets and adversarial losses »
  Gergely Neu · Michal Valko
- 2014 Poster: Optimistic Planning in Markov Decision Processes Using a Generative Model »
  Balázs Szörényi · Gunnar Kedenburg · Remi Munos
- 2013 Workshop: Bayesian Optimization in Theory and Practice »
  Matthew Hoffman · Jasper Snoek · Nando de Freitas · Michael A Osborne · Ryan Adams · Sebastien Bubeck · Philipp Hennig · Remi Munos · Andreas Krause
- 2013 Poster: Thompson Sampling for 1-Dimensional Exponential Family Bandits »
  Nathaniel Korda · Emilie Kaufmann · Remi Munos
- 2013 Poster: Online learning in episodic Markovian decision processes by relative entropy policy search »
  Alexander Zimin · Gergely Neu
- 2013 Poster: Aggregating Optimistic Planning Trees for Solving Markov Decision Processes »
  Gunnar Kedenburg · Raphael Fonteneau · Remi Munos
- 2012 Poster: Bandit Algorithms boost Brain Computer Interfaces for motor-task selection of a brain-controlled button »
  Joan Fruitet · Alexandra Carpentier · Remi Munos · Maureen Clerc
- 2012 Poster: Adaptive Stratified Sampling for Monte-Carlo integration of Differentiable functions »
  Alexandra Carpentier · Remi Munos
- 2012 Poster: Risk-Aversion in Multi-armed Bandits »
  Amir Sani · Alessandro Lazaric · Remi Munos
- 2011 Poster: Finite Time Analysis of Stratified Sampling for Monte Carlo »
  Alexandra Carpentier · Remi Munos
- 2011 Poster: Selecting the State-Representation in Reinforcement Learning »
  Odalric-Ambrym Maillard · Remi Munos · Daniil Ryabko
- 2011 Poster: Sparse Recovery with Brownian Sensing »
  Alexandra Carpentier · Odalric-Ambrym Maillard · Remi Munos
- 2011 Session: Spotlight Session 2 »
  Remi Munos
- 2011 Session: Oral Session 1 »
  Remi Munos
- 2011 Poster: Optimistic Optimization of Deterministic Functions »
  Remi Munos
- 2011 Poster: Speedy Q-Learning »
  Mohammad Gheshlaghi Azar · Remi Munos · Mohammad Ghavamzadeh · Hilbert J Kappen
- 2010 Spotlight: Online Markov Decision Processes under Bandit Feedback »
  Gergely Neu · András György · András Antos · Csaba Szepesvari
- 2010 Poster: Online Markov Decision Processes under Bandit Feedback »
  Gergely Neu · András György · Csaba Szepesvari · András Antos
- 2010 Spotlight: LSTD with Random Projections »
  Mohammad Ghavamzadeh · Alessandro Lazaric · Odalric-Ambrym Maillard · Remi Munos
- 2010 Poster: LSTD with Random Projections »
  Mohammad Ghavamzadeh · Alessandro Lazaric · Odalric-Ambrym Maillard · Remi Munos
- 2010 Poster: Scrambled Objects for Least-Squares Regression »
  Odalric-Ambrym Maillard · Remi Munos
- 2010 Poster: Error Propagation for Approximate Policy and Value Iteration »
  Amir-massoud Farahmand · Remi Munos · Csaba Szepesvari
- 2009 Poster: Sensitivity analysis in HMMs with application to likelihood maximization »
  Pierre-Arnaud Coquelin · Romain Deguest · Remi Munos
- 2009 Poster: Compressed Least-Squares Regression »
  Odalric-Ambrym Maillard · Remi Munos
- 2008 Poster: Online Optimization in X-Armed Bandits »
  Sebastien Bubeck · Remi Munos · Gilles Stoltz · Csaba Szepesvari
- 2008 Poster: Algorithms for Infinitely Many-Armed Bandits »
  Yizao Wang · Jean-Yves Audibert · Remi Munos
- 2008 Spotlight: Algorithms for Infinitely Many-Armed Bandits »
  Yizao Wang · Jean-Yves Audibert · Remi Munos
- 2008 Poster: Particle Filter-based Policy Gradient in POMDPs »
  Pierre-Arnaud Coquelin · Romain Deguest · Remi Munos
- 2007 Poster: Fitted Q-iteration in continuous action-space MDPs »
  Remi Munos · András Antos · Csaba Szepesvari