
Mirror Descent Meets Fixed Share (and feels no regret)
Nicolò Cesa-Bianchi · Pierre Gaillard · Gabor Lugosi · Gilles Stoltz

Wed Dec 05 07:00 PM -- 12:00 AM (PST) @ Harrah’s Special Events Center 2nd Floor

Mirror descent with an entropic regularizer is known to achieve shifting regret bounds that are logarithmic in the dimension. This is achieved either through a carefully designed projection or through a weight-sharing technique. Via a novel unified analysis, we show that these two approaches deliver essentially equivalent bounds on a notion of regret generalizing shifting, adaptive, discounted, and other related regrets. Our analysis also captures and extends the generalized weight-sharing technique of Bousquet and Warmuth, and can be refined in several ways, including improvements for small losses and adaptive tuning of parameters.
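To make the weight-sharing idea concrete, here is a minimal sketch of the classical fixed-share update (an exponential-weights step, i.e. mirror descent with an entropic regularizer, followed by mixing a small uniform mass back into the weights). The function name, the choice of parameters `eta` and `alpha`, and the loss range are illustrative assumptions, not the paper's exact tuned algorithm.

```python
import numpy as np

def fixed_share(losses, eta=0.5, alpha=0.01):
    """Minimal fixed-share forecaster sketch (illustrative, not the paper's tuned method).

    losses: (T, N) array of per-round expert losses, assumed in [0, 1].
    eta:    learning rate of the exponential-weights step (assumed value).
    alpha:  share rate, the fraction of weight redistributed uniformly (assumed value).
    Returns the (T, N) sequence of weight vectors used at each round.
    """
    T, N = losses.shape
    w = np.full(N, 1.0 / N)            # uniform initial weights
    history = np.empty((T, N))
    for t in range(T):
        history[t] = w
        # exponential-weights step: mirror descent with entropic regularizer
        v = w * np.exp(-eta * losses[t])
        v /= v.sum()
        # weight sharing: mix in a small uniform component so weights can
        # recover quickly when the best expert shifts
        w = alpha / N + (1.0 - alpha) * v
    return history
```

Because every weight stays at least `alpha / N`, the forecaster can track a shifting sequence of experts instead of committing irreversibly to one, which is the mechanism behind shifting regret bounds.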

Author Information

Nicolò Cesa-Bianchi (Università degli Studi di Milano, Italy)
Pierre Gaillard (École Normale Supérieure)
Gabor Lugosi (Pompeu Fabra University, Barcelona)

Gabor Lugosi is an ICREA research professor at the Department of Economics and Business, Pompeu Fabra University, Barcelona. He received his Ph.D. from the Hungarian Academy of Sciences in 1991. His research has mostly focused on the mathematical aspects of machine learning and related topics in probability and mathematical statistics, including combinatorial statistics, the analysis of random structures, and information theory. He is a co-author of several monographs on pattern recognition, density estimation, online learning, and concentration inequalities.

Gilles Stoltz (HEC Paris)