

Poster

The Lazy Online Subgradient Algorithm is Universal on Strongly Convex Domains

Daron Anderson · Douglas Leith

Keywords: [ Machine Learning ] [ Online Learning ] [ Optimization ]


Abstract: We study Online Lazy Gradient Descent for optimisation on a strongly convex domain. The algorithm is known to achieve $O(\sqrt N)$ regret against adversarial opponents; here we show it is universal in the sense that it also achieves $O(\log N)$ expected regret against i.i.d. opponents. This improves upon the more complex meta-algorithm of Huang et al. \cite{FTLBall}, which only achieves $O(\sqrt{N \log N})$ and $O(\log N)$ bounds in the adversarial and i.i.d. settings respectively. In addition we show that, unlike for the simplex, order bounds for pseudo-regret and expected regret are equivalent for strongly convex domains.
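To make the algorithm concrete, below is a minimal sketch of lazy online (sub)gradient descent on the unit Euclidean ball, one example of a strongly convex domain. The "lazy" iterate is the projection of the scaled negative running gradient sum onto the domain. The step size, the unit-ball domain, and the i.i.d. linear-loss oracle are illustrative assumptions for this sketch, not the paper's exact parameterisation.

```python
import numpy as np

def lazy_ogd(grad_oracle, dim, n_rounds, eta=0.1):
    """Lazy online (sub)gradient descent on the unit Euclidean ball.

    Each round plays the Euclidean projection of -eta * (sum of past
    subgradients) onto the domain. Fixed eta is an illustrative choice.
    """
    grad_sum = np.zeros(dim)
    iterates = []
    for t in range(1, n_rounds + 1):
        # Lazy step: project the scaled negative gradient sum onto the ball.
        y = -eta * grad_sum
        x = y / max(1.0, np.linalg.norm(y))  # projection onto the unit ball
        iterates.append(x)
        grad_sum += grad_oracle(t, x)        # subgradient of round-t loss at x
    return iterates

# Illustrative i.i.d. opponent: linear losses <g_t, x> with mean vector mu.
rng = np.random.default_rng(0)
mu = np.array([0.6, -0.8])
oracle = lambda t, x: mu + 0.1 * rng.standard_normal(2)
xs = lazy_ogd(oracle, dim=2, n_rounds=1000)
print(xs[-1])  # approaches -mu / ||mu||, the optimum on the ball
```

Under these assumptions the comparator for linear losses is the boundary point $-\mu/\|\mu\|$, and the iterates converge to it, which is the i.i.d. regime where the paper's $O(\log N)$ expected-regret bound applies.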
