Poster
The Lazy Online Subgradient Algorithm is Universal on Strongly Convex Domains
Daron Anderson · Douglas Leith
Keywords: [ Machine Learning ] [ Online Learning ] [ Optimization ]
Abstract:
We study Online Lazy Gradient Descent for optimisation on a strongly convex domain. The algorithm is known to achieve O(√N) regret against adversarial opponents; here we show it is universal, in the sense that it also achieves O(log N) expected regret against i.i.d. opponents. This improves on the more complex meta-algorithm of Huang et al., which only obtains O(√N log N) and O(log N) bounds. In addition, we show that, unlike for the simplex, order bounds for pseudo-regret and expected regret are equivalent on strongly convex domains.
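For orientation, here is a minimal sketch of the lazy update studied in the paper, instantiated on the unit Euclidean ball (a canonical strongly convex domain). The constant step size eta, the i.i.d. Gaussian linear-loss stream, and the helper names lazy_ogd and project_to_ball are illustrative assumptions, not taken from the paper; the paper's analysis concerns the regret of this scheme, not a particular implementation.

```python
import numpy as np

def project_to_ball(y, radius=1.0):
    """Euclidean projection onto {x : ||x|| <= radius},
    a canonical strongly convex domain."""
    norm = np.linalg.norm(y)
    return y if norm <= radius else (radius / norm) * y

def lazy_ogd(gradients, radius=1.0, eta=0.1):
    """Lazy Online (Sub)Gradient Descent: keep an unprojected
    running sum of past (sub)gradients and project once per round,
    i.e. play x_t = Proj(-eta * sum_{s < t} g_s).

    gradients: iterable of per-round (sub)gradient vectors g_t.
    Returns the list of plays x_1, x_2, ...
    """
    plays = []
    g_sum = None
    for g in gradients:
        if g_sum is None:
            g_sum = np.zeros_like(np.asarray(g, dtype=float))
        # The play depends only on gradients from earlier rounds.
        plays.append(project_to_ball(-eta * g_sum, radius))
        g_sum += g
    return plays

# Hypothetical usage: i.i.d. linear losses f_t(x) = <g_t, x>
# with g_t drawn from a fixed Gaussian, matching the stochastic
# (i.i.d. opponent) setting of the abstract.
rng = np.random.default_rng(0)
mu = np.array([1.0, -0.5])
grads = [mu + rng.standard_normal(2) for _ in range(1000)]
plays = lazy_ogd(grads, radius=1.0, eta=0.05)
```

The "lazy" design choice is that the gradient sum is accumulated without intermediate projections; only the played point is projected, which is what distinguishes this algorithm from the greedy (projected-at-every-step) variant.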