Poster
Fast Rates for Bandit Optimization with Upper-Confidence Frank-Wolfe
Quentin Berthet · Vianney Perchet

Tue Dec 05 06:30 PM -- 10:30 PM (PST) @ Pacific Ballroom #64

We consider the problem of bandit optimization, inspired by stochastic optimization and online learning problems with bandit feedback. In this problem, the objective is to minimize a global loss function of all the actions, not necessarily a cumulative loss. This framework allows us to study a very general class of problems, with applications in statistics, machine learning, and other fields. To solve this problem, we analyze the Upper-Confidence Frank-Wolfe algorithm, inspired by techniques for bandits and convex optimization. We give theoretical guarantees for the performance of this algorithm over various classes of functions, and discuss the optimality of these results.
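To make the setting concrete, here is a minimal sketch of an Upper-Confidence Frank-Wolfe step in the special case of a linear loss L(p) = ⟨mu, p⟩ over the proportions p of plays, where the algorithm reduces to a UCB-style rule: play the arm minimizing a lower-confidence estimate of the gradient coordinate. All parameter choices (noise level, confidence radius, horizon) are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def ucb_frank_wolfe(mu, horizon, rng, noise_sd=0.1):
    """Illustrative sketch of Upper-Confidence Frank-Wolfe on the linear
    loss L(p) = <mu, p>, where p is the proportion of plays of each arm.
    Playing arm k reveals a noisy sample of the gradient coordinate mu[k].
    Hypothetical parameters; not the paper's reference implementation."""
    K = len(mu)
    counts = np.zeros(K)   # T_k: number of plays of arm k
    means = np.zeros(K)    # empirical estimates of the gradient mu
    # Initialization: play every arm once.
    for k in range(K):
        counts[k] = 1
        means[k] = mu[k] + rng.normal(0, noise_sd)
    for t in range(K, horizon):
        # Optimistic (lower-confidence) gradient estimate, since we minimize.
        bonus = np.sqrt(2.0 * np.log(t + 1) / counts)
        k = int(np.argmin(means - bonus))
        # Frank-Wolfe step: playing arm k moves p toward the vertex e_k.
        x = mu[k] + rng.normal(0, noise_sd)   # noisy gradient coordinate
        counts[k] += 1
        means[k] += (x - means[k]) / counts[k]
    return counts / horizon   # empirical proportion p_T

rng = np.random.default_rng(0)
p = ucb_frank_wolfe(np.array([0.1, 0.5, 0.9]), horizon=5000, rng=rng)
print(p)  # most of the mass falls on arm 0, the coordinate with smallest loss
```

For a general (non-linear) loss, the same scheme would apply confidence bounds to estimates of the gradient of L at the current proportion p_t; the linear case above is the cumulative-loss bandit recovered as a special case.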

Author Information

Quentin Berthet (University of Cambridge)

Quentin Berthet is a University Lecturer in the Statslab, DPMMS, at the University of Cambridge, and a Faculty Fellow at the Alan Turing Institute. He is a former student of the École Polytechnique, received a Ph.D. from Princeton University in 2014, and was a CMI postdoctoral fellow at Caltech.

Vianney Perchet (ENS Paris-Saclay & Criteo Research)
