

Poster

Parameter-free Regret in High Probability with Heavy Tails

Jiujia Zhang · Ashok Cutkosky

Hall J (level 1) #318

Keywords: [ Online Learning ] [ Parameter-free ] [ Regularization ] [ online convex optimization ] [ heavy tails ]


Abstract:

We present new algorithms for online convex optimization over unbounded domains that obtain parameter-free regret in high probability given access only to potentially heavy-tailed subgradient estimates. Previous work in unbounded domains considers only in-expectation results for sub-exponential subgradients. Unlike in the bounded-domain case, we cannot rely on straightforward martingale concentration due to the exponentially large iterates produced by the algorithm. We develop new regularization techniques to overcome these problems. Overall, with probability at least 1 − δ, for all comparators u our algorithm achieves regret Õ(∥u∥ T^{1/p} log(1/δ)) for subgradients with bounded pth moments for some p ∈ (1, 2].
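To make the guarantee concrete, the following is a minimal LaTeX sketch of the standard online convex optimization setup the abstract refers to; the notation (w_t for iterates, g_t for subgradient estimates, G for the moment bound) is assumed here for illustration and is not taken verbatim from the paper.

% Sketch only: the symbols w_t, g_t, G are illustrative assumptions.
\[
  \sum_{t=1}^{T} \langle g_t,\, w_t - u \rangle
  \;\le\; \tilde{O}\!\left( \|u\| \, T^{1/p} \log(1/\delta) \right)
  \qquad \text{with probability at least } 1 - \delta,
\]
where at each round $t$ the learner plays $w_t \in \mathbb{R}^d$ and then observes a stochastic subgradient $g_t$ satisfying the heavy-tail moment bound $\mathbb{E}\!\left[ \|g_t\|^p \mid w_t \right] \le G^p$ for some $p \in (1, 2]$. The bound holds simultaneously for all comparators $u \in \mathbb{R}^d$, which is what makes it parameter-free: no prior knowledge of $\|u\|$ is required.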
