

Poster

Stochastic Continuous Greedy ++: When Upper and Lower Bounds Match

Amin Karbasi · Hamed Hassani · Aryan Mokhtari · Zebang Shen

East Exhibition Hall B, C #169

Keywords: [ Submodular Optimization ] [ Optimization ]


Abstract: In this paper, we develop Stochastic Continuous Greedy++ (SCG++), the first efficient variant of a conditional gradient method for maximizing a continuous submodular function subject to a convex constraint. Concretely, for a monotone and continuous DR-submodular function, SCG++ achieves a tight $[(1-1/e)\mathrm{OPT} - \epsilon]$ solution while using $O(1/\epsilon^2)$ stochastic gradients and $O(1/\epsilon)$ calls to the linear optimization oracle. The best previously known algorithms either achieve a suboptimal $[(1/2)\mathrm{OPT} - \epsilon]$ solution with $O(1/\epsilon^2)$ stochastic gradients or the tight $[(1-1/e)\mathrm{OPT} - \epsilon]$ solution with suboptimal $O(1/\epsilon^3)$ stochastic gradients. We further provide an information-theoretic lower bound to showcase the necessity of $\Omega(1/\epsilon^2)$ stochastic oracle queries in order to achieve $[(1-1/e)\mathrm{OPT} - \epsilon]$ for monotone and DR-submodular functions. This result shows that our proposed SCG++ enjoys optimality in terms of both approximation guarantee, i.e., the $(1-1/e)$ approximation factor, and stochastic gradient evaluations, i.e., $O(1/\epsilon^2)$ calls to the stochastic oracle. By using stochastic continuous optimization as an interface, we also show that it is possible to obtain the tight $[(1-1/e)\mathrm{OPT} - \epsilon]$ approximation guarantee for maximizing a monotone but stochastic submodular set function subject to a general matroid constraint after at most $O(n^2/\epsilon^2)$ calls to the stochastic function value, where $n$ is the number of elements in the ground set.
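To make the setting concrete, the following is a minimal Python sketch of the generic stochastic conditional gradient (continuous greedy) template that SCG++ refines: each iteration averages a mini-batch of stochastic gradients, queries the linear optimization oracle over the convex constraint set, and takes a $1/T$ step. This is not the paper's SCG++ variance-reduced estimator; the names `stoch_grad`, `lmo`, the toy objective, and the step-size/batch choices are illustrative assumptions.

```python
import numpy as np

def stochastic_continuous_greedy(stoch_grad, lmo, dim, T=100, batch=10):
    """Generic stochastic Frank-Wolfe / continuous greedy loop for monotone
    DR-submodular maximization over a down-closed convex set K (a sketch,
    not the paper's SCG++ variance-reduced method)."""
    x = np.zeros(dim)  # start at the origin
    for _ in range(T):
        # Average a mini-batch of unbiased stochastic gradients at x.
        g = np.mean([stoch_grad(x) for _ in range(batch)], axis=0)
        # Linear optimization oracle: v = argmax_{v in K} <v, g>.
        v = lmo(g)
        # Move a 1/T fraction toward v; after T steps x is a convex
        # combination of oracle outputs, hence feasible in K.
        x = x + v / T
    return x

# Illustrative usage (all names hypothetical): maximize a monotone
# DR-submodular toy function over {x in [0,1]^d : ||x||_1 <= k}.
if __name__ == "__main__":
    d, k = 10, 3
    w = np.random.rand(d)

    def stoch_grad(x, noise=0.1):
        # Unbiased noisy gradient of f(x) = <w, log(1 + x)>, which is
        # monotone and DR-submodular on [0, 1]^d.
        return w / (1.0 + x) + noise * np.random.randn(d)

    def lmo(g):
        # argmax of <v, g> over the box intersected with the L1 ball of
        # radius k: put mass 1 on the k largest gradient coordinates.
        v = np.zeros(d)
        v[np.argsort(g)[-k:]] = 1.0
        return v

    x_hat = stochastic_continuous_greedy(stoch_grad, lmo, d, T=200, batch=8)
    print("value:", float(w @ np.log1p(x_hat)))
```

The fixed $1/T$ step size is what keeps the final iterate feasible without a projection step; SCG++'s contribution, per the abstract, lies in reducing the number of stochastic gradient queries needed for the $(1-1/e)$ guarantee to $O(1/\epsilon^2)$, which this plain averaging scheme does not achieve.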
