

Poster

Improved Dynamic Regret for Non-degenerate Functions

Lijun Zhang · Tianbao Yang · Jinfeng Yi · Rong Jin · Zhi-Hua Zhou

Pacific Ballroom #62

Keywords: [ Learning Theory ] [ Online Learning ] [ Convex Optimization ] [ Hardness of Learning and Approximations ]


Abstract:

Recently, there has been growing research interest in the analysis of dynamic regret, which measures the performance of an online learner against a sequence of local minimizers. By exploiting strong convexity, previous studies have shown that the dynamic regret can be upper bounded by the path-length of the comparator sequence. In this paper, we show that the dynamic regret can be further improved by allowing the learner to query the gradient of the function multiple times per round, and that strong convexity can be weakened to other non-degenerate conditions. Specifically, we introduce the squared path-length, which can be much smaller than the path-length, as a new regularity of the comparator sequence. When multiple gradients are accessible to the learner, we first demonstrate that the dynamic regret of strongly convex functions can be upper bounded by the minimum of the path-length and the squared path-length. We then extend this guarantee to functions that are semi-strongly convex or self-concordant. To the best of our knowledge, this is the first time that semi-strong convexity and self-concordance have been utilized to tighten the dynamic regret.
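
For concreteness, the quantities named in the abstract can be written out as follows; the notation ($\mathbf{x}_t^*$, $\mathcal{P}_T^*$, $\mathcal{S}_T^*$) is assumed from this line of work, and constants and smoothness conditions are omitted:

```latex
% Dynamic regret against the per-round minimizers
% x_t^* = argmin_{x in X} f_t(x)
\[
R_T^d \;=\; \sum_{t=1}^{T} f_t(\mathbf{x}_t)
        \;-\; \sum_{t=1}^{T} \min_{\mathbf{x}\in\mathcal{X}} f_t(\mathbf{x})
\]
% Path-length and squared path-length of the comparator sequence
\[
\mathcal{P}_T^{*} \;=\; \sum_{t=2}^{T} \bigl\|\mathbf{x}_t^{*} - \mathbf{x}_{t-1}^{*}\bigr\|,
\qquad
\mathcal{S}_T^{*} \;=\; \sum_{t=2}^{T} \bigl\|\mathbf{x}_t^{*} - \mathbf{x}_{t-1}^{*}\bigr\|^{2}
\]
% With multiple gradient queries per round, the bound for strongly
% convex (and smooth) functions takes the form
\[
R_T^d \;=\; O\bigl(\min(\mathcal{P}_T^{*}, \mathcal{S}_T^{*})\bigr)
\]
```

To see why the squared path-length can be much smaller: if successive minimizers move by $\Theta(1/\sqrt{T})$ each round, then $\mathcal{P}_T^*$ grows as $\Theta(\sqrt{T})$ while $\mathcal{S}_T^*$ stays $\Theta(1)$.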
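
The abstract does not spell out the algorithm, but a minimal sketch of the multi-query idea is online gradient descent in which the learner takes K gradient steps on the just-revealed loss before committing to the next point. The oracle interface, projection radius, step size, and the quadratic losses in the usage example below are all illustrative assumptions, not the paper's exact pseudocode:

```python
import numpy as np

def project(x, radius=10.0):
    # Euclidean projection onto an origin-centered ball (a hypothetical
    # stand-in for the learner's feasible domain X).
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def multi_query_ogd(grad_oracles, x0, eta=0.5, K=5):
    """Online gradient descent with K gradient queries per round.

    grad_oracles : one gradient oracle per round; oracle(x) returns
                   the gradient of that round's loss f_t at x.
    Returns the list of points the learner played, x_1, ..., x_T.
    """
    x = np.asarray(x0, dtype=float)
    played = []
    for grad_t in grad_oracles:
        played.append(x.copy())        # play x_t, then f_t is revealed
        z = x
        for _ in range(K):             # K gradient steps on f_t
            z = project(z - eta * grad_t(z))
        x = z                          # becomes x_{t+1}
    return played

# Usage: track shifting quadratics f_t(x) = 0.5 * ||x - c_t||^2 whose
# minimizers c_t drift slowly, so both regularities stay small.
rng = np.random.default_rng(0)
centers = np.cumsum(0.01 * rng.standard_normal((100, 2)), axis=0)
oracles = [lambda x, c=c: x - c for c in centers]  # grad of f_t at x
points = multi_query_ogd(oracles, x0=np.zeros(2))
```

With K = 1 this reduces to standard online gradient descent; the extra per-round steps are what let the iterate track the drifting minimizer closely enough for the squared-path-length bound to apply.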
