Poster
An Online Method for A Class of Distributionally Robust Optimization with Non-convex Objectives
Qi Qi · Zhishuai Guo · Yi Xu · Rong Jin · Tianbao Yang

Tue Dec 07 08:30 AM -- 10:00 AM (PST)

In this paper, we propose a practical online method for solving a class of distributionally robust optimization (DRO) problems with non-convex objectives, which has important applications in machine learning for improving the robustness of neural networks. Most methods in the literature for solving DRO are based on stochastic primal-dual methods. However, primal-dual methods for DRO suffer from several drawbacks: (1) manipulating a high-dimensional dual variable whose size equals that of the dataset is computationally expensive; (2) they are not amenable to online learning, where data arrives sequentially. To address these issues, we consider a class of DRO with a KL-divergence regularization on the dual variables, transform the min-max problem into a compositional minimization problem, and propose practical duality-free online stochastic methods that do not require a large mini-batch size. We establish state-of-the-art complexities for the proposed methods, both with and without a Polyak-Łojasiewicz (PL) condition on the objective. Empirical studies on large-scale deep learning tasks (i) demonstrate that our method can speed up training by more than 2× over baseline methods and save days of training time on a large-scale dataset with ∼265K images, and (ii) verify the superior performance of DRO over Empirical Risk Minimization (ERM) on imbalanced datasets. Of independent interest, the proposed method can also be used to solve a family of stochastic compositional problems with state-of-the-art complexities.
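The duality-free idea can be illustrated with a minimal sketch: under KL regularization, the DRO min-max problem reduces to minimizing the compositional objective λ log E[exp(ℓ(w; x)/λ)], and a moving average can track the inner expectation so that no per-sample dual variable is kept. The snippet below is an illustrative sketch on a toy linear model with squared loss, not the authors' exact algorithm; the function names, step sizes, and the `beta` moving-average constant are assumptions for illustration.

```python
import numpy as np

def kl_dro_objective(w, X, y, lam):
    """KL-regularized DRO objective: lam * log E[exp(loss_i / lam)]."""
    losses = 0.5 * (X @ w - y) ** 2
    return lam * np.log(np.mean(np.exp(losses / lam)))

def kl_dro_update(w, s, X, y, lam=1.0, beta=0.9, lr=0.1):
    """One duality-free stochastic step (illustrative sketch).

    s is a moving-average estimate of the inner expectation
    u(w) = E[exp(loss_i / lam)]; the step follows the gradient
    of lam * log u(w) = E[exp(loss_i/lam) * grad_loss_i] / u(w),
    so no n-dimensional dual variable is maintained.
    """
    residual = X @ w - y
    losses = 0.5 * residual ** 2
    grads = X * residual[:, None]                 # per-sample d loss_i / d w
    weights = np.exp(losses / lam)                # exponential tilting of samples
    s = beta * s + (1 - beta) * weights.mean()    # moving-average tracker of u(w)
    g = (weights[:, None] * grads).mean(axis=0) / max(s, 1e-12)
    return w - lr * g, s
```

In an online setting, `X` and `y` would be a fresh mini-batch at each step while the scalar tracker `s` persists across iterations, which is what removes the need for a large batch to estimate the inner expectation.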

Author Information

Qi Qi (University of Iowa)
Zhishuai Guo (University of Iowa)
Yi Xu (Alibaba Group U.S. Inc.)
Rong Jin (Alibaba)
Tianbao Yang (The University of Iowa)
