Poster
Efficient Generalization with Distributionally Robust Learning
Soumyadip Ghosh · Mark Squillante · Ebisa Wollega
Distributionally robust learning (DRL) is increasingly seen as a viable method to train machine learning models for improved model generalization. These min-max formulations, however, are more difficult to solve than their empirical risk minimization counterparts. We provide a new stochastic gradient descent algorithm to efficiently solve this DRL formulation. Our approach applies gradient descent to the outer minimization formulation and estimates the gradient of the inner maximization based on a sample average approximation. The latter uses a subset of the data sampled without replacement in each iteration, progressively increasing the subset size to ensure convergence. We rigorously establish convergence to a near-optimal solution under standard regularity assumptions and, for strongly convex losses, match the best known $O(\epsilon^{-1})$ rate of convergence up to a known threshold. Empirical results demonstrate the significant benefits of our approach over previous work in improving learning for model generalization.
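The abstract describes the algorithmic structure: outer gradient descent, with the inner-maximization gradient estimated by a sample average approximation over a without-replacement subsample whose size grows across iterations. The sketch below is a minimal illustration of that loop, not the authors' implementation: the inner maximization here uses a softmax (KL-ball dual) weighting as a stand-in for whatever uncertainty set the paper employs, and the logistic loss, step-size rule, and subset-growth schedule are all illustrative assumptions.

```python
import numpy as np

def drl_sgd(X, y, steps=500, lr=0.5, n0=16, growth=1.02, tau=1.0, seed=0):
    """Hypothetical sketch of DRL via SGD: outer descent on theta, inner
    max approximated on a growing, without-replacement subsample.
    The KL-dual (softmax) inner weighting is an assumption, not
    necessarily the uncertainty set used in the paper."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)
    m = float(n0)
    for t in range(steps):
        # Subsample without replacement; subset size grows each iteration
        k = min(int(m), n)
        idx = rng.choice(n, size=k, replace=False)
        Xb, yb = X[idx], y[idx]
        # Per-example logistic losses and gradients (labels in {-1, +1})
        z = Xb @ theta
        p = 1.0 / (1.0 + np.exp(-yb * z))
        losses = -np.log(np.clip(p, 1e-12, None))
        grads = (-(1.0 - p) * yb)[:, None] * Xb          # shape (k, d)
        # Inner maximization via sample average approximation:
        # adversarial weights concentrate on high-loss examples;
        # tau controls the degree of robustness
        w = np.exp(losses / tau)
        w /= w.sum()
        # Outer gradient descent step on the robust (weighted) gradient
        theta -= (lr / np.sqrt(t + 1)) * (w @ grads)
        m *= growth                                      # grow subset size
    return theta

# Toy usage on synthetic data with labels in {-1, +1}
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
y = np.sign(X @ rng.normal(size=5) + 0.1 * rng.normal(size=1000))
theta_hat = drl_sgd(X, y)
```

Growing the subset progressively trades per-iteration cost against the bias of the inner-max gradient estimate, which is the mechanism the abstract credits for ensuring convergence.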
Author Information
Soumyadip Ghosh (IBM Research)
Mark Squillante (IBM Research)
Ebisa Wollega
More from the Same Authors
- 2022 Poster: A Stochastic Linearized Augmented Lagrangian Method for Decentralized Bilevel Optimization
  Songtao Lu · Siliang Zeng · Xiaodong Cui · Mark Squillante · Lior Horesh · Brian Kingsbury · Jia Liu · Mingyi Hong
- 2020 Poster: Quantifying the Empirical Wasserstein Distance to a Set of Measures: Beating the Curse of Dimensionality
  Nian Si · Jose Blanchet · Soumyadip Ghosh · Mark Squillante
- 2020 Spotlight: Quantifying the Empirical Wasserstein Distance to a Set of Measures: Beating the Curse of Dimensionality
  Nian Si · Jose Blanchet · Soumyadip Ghosh · Mark Squillante
- 2019 Poster: A Family of Robust Stochastic Operators for Reinforcement Learning
  Yingdong Lu · Mark Squillante · Chai Wah Wu