

Poster

Online Composite Optimization Between Stochastic and Adversarial Environments

Yibo Wang · Sijia Chen · Wei Jiang · Wenhao Yang · Yuanyu Wan · Lijun Zhang

West Ballroom A-D #5808
Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract: We study online composite optimization under the Stochastically Extended Adversarial (SEA) model. Specifically, each loss function consists of two parts: a fixed non-smooth and convex regularizer, and a time-varying function that can be chosen stochastically, adversarially, or in a manner that interpolates between the two extremes. In this setting, we show that for smooth and convex time-varying functions, optimistic composite mirror descent (OptCMD) obtains an $\mathcal{O}(\sqrt{\sigma_{1:T}^2 + \Sigma_{1:T}^2})$ regret bound, where $\sigma_{1:T}^2$ and $\Sigma_{1:T}^2$ denote the cumulative stochastic variance and the cumulative adversarial variation of the time-varying functions, respectively. For smooth and strongly convex time-varying functions, we establish an $\mathcal{O}((\sigma_{\max}^2 + \Sigma_{\max}^2)\log(\sigma_{1:T}^2 + \Sigma_{1:T}^2))$ regret bound, where $\sigma_{\max}^2$ and $\Sigma_{\max}^2$ denote the maximal stochastic variance and the maximal adversarial variation, respectively. For smooth and exp-concave time-varying functions, we achieve an $\mathcal{O}(d \log(\sigma_{1:T}^2 + \Sigma_{1:T}^2))$ bound, where $d$ denotes the dimensionality. Moreover, to handle the unknown function type in practical problems, we propose a multi-level universal algorithm that achieves the desirable bounds for all three types of time-varying functions simultaneously. Notably, all our findings match existing bounds for the SEA model without the regularizer, which implies that there is no price in regret bounds for the benefits gained from the regularizer.
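To make the setting concrete, here is a minimal sketch of optimistic composite mirror descent in the special case of a Euclidean distance-generating function and an $\ell_1$ regularizer (so the mirror step reduces to soft-thresholding). This is an illustrative instance only, not the paper's general Bregman-divergence formulation; the step size `eta`, the $\ell_1$ weight `lam`, and the choice of the previous gradient as the optimistic prediction are all assumptions of this sketch.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (the fixed non-smooth regularizer).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def optcmd(grad_fn, d, T, eta, lam):
    """Optimistic composite mirror descent, Euclidean case with an l1
    regularizer. grad_fn(t, x) returns the gradient of the time-varying
    (smooth) part of the loss at round t."""
    y = np.zeros(d)   # auxiliary (mirror) iterate
    m = np.zeros(d)   # optimistic prediction of the next gradient
    xs = []
    for t in range(T):
        # Play x_t: a composite mirror step from y using the prediction m.
        x = soft_threshold(y - eta * m, eta * lam)
        xs.append(x)
        g = grad_fn(t, x)                       # observe the true gradient
        y = soft_threshold(y - eta * g, eta * lam)  # update the auxiliary iterate
        m = g          # predict the next gradient by the last one
    return np.array(xs)

# Toy example: fixed smooth part f_t(x) = 0.5 * ||x - b||^2 plus lam*||x||_1.
# The composite minimizer is soft_threshold(b, lam).
b = np.array([1.0, -2.0, 0.05])
lam = 0.1
xs = optcmd(lambda t, x: x - b, d=3, T=2000, eta=0.05, lam=lam)
print(xs[-1])  # close to soft_threshold(b, lam) = [0.9, -1.9, 0.0]
```

In a genuinely stochastic or adversarial sequence, `grad_fn` would change across rounds; when consecutive gradients are close (small cumulative variance and variation), the optimistic prediction `m` is accurate and the regret shrinks accordingly, which is the intuition behind the variance- and variation-dependent bounds above.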
