Poster
Black-box optimization of noisy functions with unknown smoothness
Jean-Bastien Grill · Michal Valko · Remi Munos

Mon Dec 07 04:00 PM -- 08:59 PM (PST) @ 210 C #89
We study the problem of black-box optimization of a function $f$ of any dimension, given function evaluations perturbed by noise. The function is assumed to be locally smooth around one of its global optima, but this smoothness is unknown. Our contribution is an adaptive optimization algorithm, POO or parallel optimistic optimization, that is able to deal with this setting. POO performs almost as well as the best known algorithms requiring the knowledge of the smoothness. Furthermore, POO works for a larger class of functions than what was previously considered, especially for functions that are difficult to optimize, in a very precise sense. We provide a finite-time analysis of POO's performance, which shows that its error after $n$ evaluations is at most a factor of $\sqrt{\ln n}$ away from the error of the best known optimization algorithms using the knowledge of the smoothness.
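The core idea can be illustrated with a short, self-contained sketch. POO hedges over the unknown smoothness by running several instances of a base optimizer, here a simplified HOO (hierarchical optimistic optimization), each with a different smoothness parameter rho, sharing the evaluation budget round-robin and finally recommending a point from the instance with the best empirical mean reward. Everything below is an assumption made for illustration: the simplified HOO (it uses optimistic U-values directly instead of the full B-value backup of the original algorithm), the geometric grid of rho values, the confidence widths, and the toy objective. It is a sketch of the idea, not the authors' reference implementation.

import math
import random


class HOONode:
    """One cell of the hierarchical partition of [low, high]."""

    def __init__(self, low, high, depth):
        self.low, self.high, self.depth = low, high, depth
        self.count = 0        # evaluations routed through this cell
        self.mean = 0.0       # empirical mean of those evaluations
        self.children = None  # two sub-cells once this cell is split

    @property
    def mid(self):
        return 0.5 * (self.low + self.high)


class HOO:
    """Simplified HOO on [0, 1] for smoothness parameters (nu, rho).

    Uses optimistic U-values directly; the original algorithm also
    backs up B-values through the tree.
    """

    def __init__(self, nu, rho):
        self.nu, self.rho = nu, rho
        self.root = HOONode(0.0, 1.0, 0)
        self.n = 0  # evaluations spent by this instance

    def _u_value(self, node):
        if node.count == 0:
            return float("inf")  # unvisited cells are explored first
        # Empirical mean + confidence width + smoothness term nu * rho^depth.
        return (node.mean
                + math.sqrt(2.0 * math.log(max(self.n, 2)) / node.count)
                + self.nu * self.rho ** node.depth)

    def step(self, f):
        """Descend optimistically, evaluate f at the chosen cell's center."""
        self.n += 1
        path, node = [], self.root
        while node.children is not None:
            node = max(node.children, key=self._u_value)
            path.append(node)
        x = node.mid
        reward = f(x)
        # Split the chosen leaf so promising cells keep getting refined.
        node.children = [HOONode(node.low, node.mid, node.depth + 1),
                         HOONode(node.mid, node.high, node.depth + 1)]
        for v in [self.root] + path:
            v.count += 1
            v.mean += (reward - v.mean) / v.count
        return x, reward


def poo(f, budget, nu=1.0, n_instances=8):
    """Hedge over unknown smoothness: run HOO instances in parallel."""
    # Geometric grid rho_i = 0.5 ** (2 ** -i), spreading over (0, 1).
    rhos = [0.5 ** (2.0 ** -i) for i in range(n_instances)]
    instances = [HOO(nu, r) for r in rhos]
    stats = [[0, 0.0, 0.5] for _ in rhos]  # [count, mean reward, last x]
    for t in range(budget):
        i = t % n_instances  # round-robin budget allocation
        x, r = instances[i].step(f)
        s = stats[i]
        s[0] += 1
        s[1] += (r - s[1]) / s[0]
        s[2] = x
    # Recommend a point from the empirically best instance.
    return max(stats, key=lambda s: s[1])[2]


if __name__ == "__main__":
    random.seed(0)
    # Toy objective: peak at x = 0.3 plus bounded evaluation noise.
    f = lambda x: 1.0 - abs(x - 0.3) ** 0.5 + random.uniform(-0.05, 0.05)
    print("recommended x:", poo(f, budget=4000))

A reasonable sanity check is that the recommended point drifts toward the toy optimum at x = 0.3 as the budget grows, even though no instance is told the true smoothness.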

Author Information

Jean-Bastien Grill (INRIA Lille - Nord Europe)
Michal Valko (INRIA Lille - Nord Europe)

Michal is a machine learning scientist at DeepMind Paris, a tenured researcher at Inria, and the lecturer of the master course Graphs in Machine Learning at ENS Paris-Saclay. Michal is primarily interested in designing algorithms that require as little human supervision as possible. This means 1) reducing the “intelligence” that humans need to input into the system and 2) minimizing the data that humans need to spend inspecting, classifying, or “tuning” the algorithms. That is why he works on methods and settings that can deal with minimal feedback, such as deep reinforcement learning, bandit algorithms, and self-supervised learning. Michal is actively working on representation learning and building world models. He is also working on deep (reinforcement) learning algorithms that have theoretical underpinnings. He has also worked on sequential algorithms with structured decisions, where exploiting the structure leads to provably faster learning. He received his Ph.D. in 2011 from the University of Pittsburgh under the supervision of Miloš Hauskrecht and was then a postdoc of Rémi Munos before taking a permanent position at Inria in 2012.

Remi Munos (Google DeepMind)
