Poster
Gradient Methods for Submodular Maximization
Hamed Hassani · Mahdi Soltanolkotabi · Amin Karbasi
In this paper, we study the problem of maximizing continuous submodular functions, which naturally arise in many learning applications such as those involving utility functions in active learning and sensing, matrix approximation, and network inference. Despite the apparent lack of convexity in such functions, we prove that stochastic projected gradient methods can provide strong approximation guarantees for maximizing continuous submodular functions subject to convex constraints. More specifically, we prove that for monotone continuous DR-submodular functions, all fixed points of projected gradient ascent provide a factor $1/2$ approximation to the global maximum. We also study stochastic gradient methods and show that after $\mathcal{O}(1/\epsilon^2)$ iterations these methods reach solutions which achieve in expectation objective values exceeding $(\frac{\text{OPT}}{2}-\epsilon)$. An immediate application of our results is to maximizing submodular functions that are defined stochastically, i.e., where the objective is an expectation over a family of submodular functions with an unknown distribution. We show that stochastic gradient methods are naturally well-suited for this setting, leading to a factor $1/2$ approximation when the function is monotone. In particular, this allows us to approximately solve discrete, monotone submodular maximization problems via projected gradient ascent on a continuous relaxation, directly connecting the discrete and continuous domains. Finally, experiments on real data demonstrate that our projected gradient methods consistently achieve the best utility compared to other continuous baselines while remaining competitive in terms of computational effort.
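The core procedure can be illustrated with a minimal sketch (not the paper's code): projected stochastic gradient ascent on a toy monotone continuous DR-submodular function, here $F(x) = 1 - \prod_i (1 - x_i)$ over a box constraint. The objective, constraint set, noise model, and step-size schedule below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10   # dimension
ub = 0.8 # box upper bound

def f(x):
    # F(x) = 1 - prod_i (1 - x_i): monotone and DR-submodular on [0, 1]^n.
    return 1.0 - np.prod(1.0 - x)

def grad(x):
    # dF/dx_j = prod_{i != j} (1 - x_i). Each entry is non-increasing in x,
    # which is exactly the diminishing-returns (DR) property.
    # Safe to divide: projection keeps x <= ub < 1.
    return np.prod(1.0 - x) / (1.0 - x)

def project(x):
    # Euclidean projection onto the convex box K = [0, ub]^n.
    return np.clip(x, 0.0, ub)

x = np.zeros(n)
T = 2000
for t in range(1, T + 1):
    g = grad(x) + 0.1 * rng.standard_normal(n)  # noisy (stochastic) gradient
    x = project(x + g / np.sqrt(t))             # diminishing step size

opt = f(np.full(n, ub))  # box maximizer, since F is monotone
# By the paper's guarantee, f(x) should be at least OPT/2 - eps in expectation.
```

Because $F$ here is monotone, the unconstrained-over-the-box optimum is attained at the upper corner, which makes the factor-$1/2$ guarantee easy to check numerically.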
Author Information
Hamed Hassani (UPenn)
Mahdi Soltanolkotabi (University of Southern California)
Amin Karbasi (Yale)
More from the Same Authors
2022 Poster: Collaborative Learning of Discrete Distributions under Heterogeneity and Communication Constraints »
Xinmeng Huang · Donghwan Lee · Edgar Dobriban · Hamed Hassani
2022 Poster: Probable Domain Generalization via Quantile Risk Minimization »
Cian Eastwood · Alexander Robey · Shashank Singh · Julius von Kügelgen · Hamed Hassani · George J. Pappas · Bernhard Schölkopf
2022 Poster: FedAvg with Fine Tuning: Local Updates Lead to Representation Learning »
Liam Collins · Hamed Hassani · Aryan Mokhtari · Sanjay Shakkottai
2022 Poster: Collaborative Linear Bandits with Adversarial Agents: Near-Optimal Regret Bounds »
Aritra Mitra · Arman Adibi · George J. Pappas · Hamed Hassani
2021 Workshop: Workshop on Deep Learning and Inverse Problems »
Reinhard Heckel · Paul Hand · Rebecca Willett · Christopher Metzler · Mahdi Soltanolkotabi
2020 Poster: Submodular Maximization Through Barrier Functions »
Ashwinkumar Badanidiyuru · Amin Karbasi · Ehsan Kazemi · Jan Vondrak
2020 Poster: Theoretical Insights Into Multiclass Classification: A High-dimensional Asymptotic View »
Christos Thrampoulidis · Samet Oymak · Mahdi Soltanolkotabi
2020 Poster: Continuous Submodular Maximization: Beyond DR-Submodularity »
Moran Feldman · Amin Karbasi
2020 Poster: Sinkhorn Natural Gradient for Generative Models »
Zebang Shen · Zhenfu Wang · Alejandro Ribeiro · Hamed Hassani
2020 Poster: Sinkhorn Barycenter via Functional Gradient Descent »
Zebang Shen · Zhenfu Wang · Alejandro Ribeiro · Hamed Hassani
2020 Spotlight: Sinkhorn Natural Gradient for Generative Models »
Zebang Shen · Zhenfu Wang · Alejandro Ribeiro · Hamed Hassani
2020 Spotlight: Submodular Maximization Through Barrier Functions »
Ashwinkumar Badanidiyuru · Amin Karbasi · Ehsan Kazemi · Jan Vondrak
2020 Session: Orals & Spotlights Track 32: Optimization »
Hamed Hassani · Jeffrey A Bilmes
2020 Poster: Minimax Regret of Switching-Constrained Online Convex Optimization: No Phase Transition »
Lin Chen · Qian Yu · Hannah Lawrence · Amin Karbasi
2020 Poster: Online MAP Inference of Determinantal Point Processes »
Aditya Bhaskara · Amin Karbasi · Silvio Lattanzi · Morteza Zadimoghaddam
2020 Poster: Minimax Lower Bounds for Transfer Learning with Linear and One-hidden Layer Neural Networks »
Mohammadreza Mousavi Kalan · Zalan Fabian · Salman Avestimehr · Mahdi Soltanolkotabi
2020 Poster: Submodular Meta-Learning »
Arman Adibi · Aryan Mokhtari · Hamed Hassani
2019 : Denoising via Early Stopping »
Mahdi Soltanolkotabi
2019 Poster: Adaptive Sequence Submodularity »
Marko Mitrovic · Ehsan Kazemi · Moran Feldman · Andreas Krause · Amin Karbasi
2019 Poster: Online Continuous Submodular Maximization: From Full-Information to Bandit Feedback »
Mingrui Zhang · Lin Chen · Hamed Hassani · Amin Karbasi
2019 Poster: Stochastic Continuous Greedy ++: When Upper and Lower Bounds Match »
Amin Karbasi · Hamed Hassani · Aryan Mokhtari · Zebang Shen
2019 Poster: Robust and Communication-Efficient Collaborative Learning »
Amirhossein Reisizadeh · Hossein Taheri · Aryan Mokhtari · Hamed Hassani · Ramtin Pedarsani
2019 Poster: Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks »
Mahyar Fazlyab · Alexander Robey · Hamed Hassani · Manfred Morari · George J. Pappas
2019 Spotlight: Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks »
Mahyar Fazlyab · Alexander Robey · Hamed Hassani · Manfred Morari · George J. Pappas
2018 Poster: Do Less, Get More: Streaming Submodular Maximization with Subsampling »
Moran Feldman · Amin Karbasi · Ehsan Kazemi
2018 Spotlight: Do Less, Get More: Streaming Submodular Maximization with Subsampling »
Moran Feldman · Amin Karbasi · Ehsan Kazemi
2017 Workshop: Discrete Structures in Machine Learning »
Yaron Singer · Jeff A Bilmes · Andreas Krause · Stefanie Jegelka · Amin Karbasi
2017 Poster: Interactive Submodular Bandit »
Lin Chen · Andreas Krause · Amin Karbasi
2017 Poster: Streaming Weak Submodularity: Interpreting Neural Networks on the Fly »
Ethan Elenberg · Alex Dimakis · Moran Feldman · Amin Karbasi
2017 Oral: Streaming Weak Submodularity: Interpreting Neural Networks on the Fly »
Ethan Elenberg · Alex Dimakis · Moran Feldman · Amin Karbasi
2017 Poster: Learning ReLUs via Gradient Descent »
Mahdi Soltanolkotabi
2017 Poster: Stochastic Submodular Maximization: The Case of Coverage Functions »
Mohammad Karimi · Mario Lucic · Hamed Hassani · Andreas Krause
2016 Poster: Estimating the Size of a Large Network and its Communities from a Random Sample »
Lin Chen · Amin Karbasi · Forrest W. Crawford
2016 Poster: Fast Distributed Submodular Cover: Public-Private Data Summarization »
Baharan Mirzasoleiman · Morteza Zadimoghaddam · Amin Karbasi
2015 Poster: Distributed Submodular Cover: Succinctly Summarizing Massive Data »
Baharan Mirzasoleiman · Amin Karbasi · Ashwinkumar Badanidiyuru · Andreas Krause
2015 Spotlight: Distributed Submodular Cover: Succinctly Summarizing Massive Data »
Baharan Mirzasoleiman · Amin Karbasi · Ashwinkumar Badanidiyuru · Andreas Krause