Poster
A Projection-free Algorithm for Constrained Stochastic Multi-level Composition Optimization
Tesi Xiao · Krishnakumar Balasubramanian · Saeed Ghadimi

Tue Nov 29 02:00 PM -- 04:00 PM (PST) @ Hall J #841
We propose a projection-free conditional gradient-type algorithm for smooth stochastic multi-level composition optimization, where the objective function is a nested composition of $T$ functions and the constraint set is a closed convex set. Our algorithm assumes access to noisy evaluations of the functions and their gradients, through a stochastic first-order oracle satisfying certain standard unbiasedness and second-moment assumptions. We show that the number of calls to the stochastic first-order oracle and the linear-minimization oracle required by the proposed algorithm, to obtain an $\epsilon$-stationary solution, are of order $\mathcal{O}_T(\epsilon^{-2})$ and $\mathcal{O}_T(\epsilon^{-3})$ respectively, where $\mathcal{O}_T$ hides constants in $T$. Notably, the dependence of these complexity bounds on $\epsilon$ and $T$ is separate, in the sense that changing one does not impact the dependence of the bounds on the other. For the case of $T=1$, we also provide a high-probability convergence result that depends poly-logarithmically on the inverse confidence level. Moreover, our algorithm is parameter-free and does not require any (increasing) order of mini-batches to converge, unlike the common practice in the analysis of stochastic conditional gradient-type algorithms.
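To make the oracle structure concrete, the following is a minimal illustrative sketch of a generic one-level ($T=1$) stochastic conditional-gradient (Frank-Wolfe) loop. It is not the paper's exact algorithm: the moving-average gradient estimator, the $\ell_1$-ball constraint, the `lmo_l1_ball` and `grad_oracle` names, and the step-size schedule $\gamma_k = 2/(k+2)$ are all assumptions made here for illustration. It shows the two oracle calls the complexity bounds count: one noisy gradient evaluation (SFO) and one linear minimization (LMO) per iteration, with no projection and no growing mini-batches.

```python
import numpy as np

def lmo_l1_ball(g, radius=1.0):
    """Linear-minimization oracle over the l1 ball (assumed constraint set):
    argmin_{||v||_1 <= radius} <g, v> is a signed vertex of the ball."""
    v = np.zeros_like(g)
    i = np.argmax(np.abs(g))
    v[i] = -radius * np.sign(g[i])
    return v

def stochastic_frank_wolfe(grad_oracle, x0, n_iters=200, radius=1.0):
    """Illustrative one-level (T = 1) stochastic conditional-gradient loop.

    A moving-average estimator d_k smooths the noisy gradients in place of
    mini-batching; gamma_k = 2/(k+2) is the classic Frank-Wolfe step size.
    This is a sketch of the general template, not the paper's scheme.
    """
    x = x0.copy()
    d = np.zeros_like(x0)
    for k in range(n_iters):
        rho = 2.0 / (k + 2)            # averaging weight (assumed schedule)
        g = grad_oracle(x)             # one SFO call: noisy gradient
        d = (1 - rho) * d + rho * g    # moving-average gradient estimate
        v = lmo_l1_ball(d, radius)     # one LMO call: no projection needed
        gamma = 2.0 / (k + 2)
        x = (1 - gamma) * x + gamma * v  # convex step keeps x feasible
    return x
```

Because each iterate is a convex combination of feasible points, the loop never leaves the constraint set, which is exactly what makes the method projection-free.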

#### Author Information

##### Tesi Xiao (University of California, Davis)

I am a final-year Ph.D. student in the Department of Statistics, UC Davis, advised by Prof. Krishna Balasubramanian. Prior to this, I received my B.S. degree in Statistics from Zhejiang University. My research interests lie at the interface of computational and algorithmic inferential problems arising in statistical machine learning. Specifically, I am working on the following topics:

- Stochastic Optimization: sample- and computationally efficient optimization algorithms.
- Reinforcement Learning and Bandits: learning with dependent data and nonstationary environments.

I am also interested in unsupervised learning (manifold learning) and deep learning (robustness, network pruning, neural architecture search).