In this paper we study the fundamental problems of maximizing a continuous non-monotone submodular function over the hypercube, both with and without coordinate-wise concavity. This family of optimization problems has several applications in machine learning, economics, and communication systems. Our main result is the first 1/2-approximation algorithm for continuous submodular function maximization; this approximation factor is the best possible for algorithms that use only polynomially many queries. For the special case of DR-submodular maximization, we provide a faster 1/2-approximation algorithm that runs in (almost) linear time. Both of these results improve upon prior work [Bian et al., 2017, Soma and Yoshida, 2017, Buchbinder et al., 2012].
Our first algorithm is a single-pass algorithm that uses novel ideas such as reducing the guaranteed-approximation problem to analyzing a zero-sum game for each coordinate, and exploiting the geometry of this zero-sum game to fix the value of that coordinate. Our second algorithm is a faster single-pass algorithm that exploits coordinate-wise concavity to identify a monotone equilibrium condition sufficient for obtaining the required approximation guarantee, and hunts for the equilibrium point using binary search. We further run experiments to verify the performance of our proposed algorithms in related machine learning applications.
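As a rough illustration of the second algorithm's structure, the sketch below makes a single pass over the coordinates and fixes each one by binary searching for the zero-crossing of a (weakly) non-increasing balance function. This is only a minimal sketch grounded in the description above, not the paper's actual procedure: the particular balance function (an average of numerical partial derivatives at the lower and upper iterates), the helper names `coordinate_pass` and `binary_search_equilibrium`, and the toy objective at the end are all illustrative assumptions.

```python
import numpy as np

def binary_search_equilibrium(balance, lo=0.0, hi=1.0, tol=1e-6):
    """Bisection for a (weakly) non-increasing `balance` function on [lo, hi]:
    returns a point where it crosses zero, or an endpoint if it never does."""
    if balance(lo) <= 0.0:   # non-positive everywhere -> stay at the left end
        return lo
    if balance(hi) >= 0.0:   # non-negative everywhere -> go to the right end
        return hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if balance(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def coordinate_pass(F, n, eps=1e-4, tol=1e-6):
    """Single pass over the n coordinates of F : [0,1]^n -> R.
    x starts at the all-zeros vector and y at the all-ones vector; each
    coordinate i is fixed at the equilibrium of a placeholder balance
    function, after which x[i] == y[i]."""
    x, y = np.zeros(n), np.ones(n)

    def partial(v, i, z):
        # One-sided finite-difference estimate of dF/dv_i at the point obtained
        # from v by setting its i-th coordinate to z (backward difference near
        # the upper boundary to stay inside [0, 1]).
        lo_z, hi_z = (z - eps, z) if z + eps > 1.0 else (z, z + eps)
        w_lo = v.copy(); w_lo[i] = lo_z
        w_hi = v.copy(); w_hi[i] = hi_z
        return (F(w_hi) - F(w_lo)) / eps

    for i in range(n):
        # Placeholder balance function (an assumption, not the paper's exact
        # condition): the average of the partial derivatives at the lower
        # iterate x and the upper iterate y.  Coordinate-wise concavity makes
        # each term, and hence the average, non-increasing in z.
        balance = lambda z, i=i: 0.5 * (partial(x, i, z) + partial(y, i, z))
        x[i] = y[i] = binary_search_equilibrium(balance, tol=tol)
    return x  # x == y after the pass

if __name__ == "__main__":
    # Toy non-monotone DR-submodular function on [0,1]^2 (all second-order
    # partial derivatives are <= 0).
    F = lambda v: v[0] + v[1] - 1.5 * v[0] * v[1]
    print(coordinate_pass(F, n=2))
```

The bisection relies only on the per-coordinate balance function being (weakly) non-increasing, which is exactly what coordinate-wise concavity provides; the specific choice of balance function is where the paper's actual approximation analysis comes in.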
Author Information
Rad Niazadeh (Stanford University)
Rad Niazadeh is an Assistant Professor of Operations Management at the University of Chicago Booth School of Business. He studies the interplay between algorithms, incentives, and learning in online marketplaces and platforms. Prior to joining Booth, he was a Motwani postdoctoral fellow in the Department of Computer Science at Stanford University and a visiting faculty member in the market algorithms group at Google Research NYC. He received his PhD in Computer Science from Cornell University. Rad has received the INFORMS Revenue Management and Pricing Dissertation Award (honorable mention), the Google PhD Fellowship in Market Algorithms, the Stanford Motwani Fellowship, and the Cornell Jacobs Fellowship.
Tim Roughgarden (Stanford University)
Joshua Wang (Google)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: Optimal Algorithms for Continuous Non-monotone Submodular and DR-Submodular Maximization »
  Thu. Dec 6th 03:05 -- 03:20 PM, Room 517 CD
More from the Same Authors
- 2021 Poster: Margin-Independent Online Multiclass Learning via Convex Geometry »
  Guru Guruganesh · Allen Liu · Jon Schneider · Joshua Wang
- 2020 Poster: Stateful Posted Pricing with Vanishing Regret via Dynamic Deterministic Markov Decision Processes »
  Yuval Emek · Ron Lavi · Rad Niazadeh · Yangguang Shi
- 2019 Poster: Efficient Rematerialization for Deep Networks »
  Ravi Kumar · Manish Purohit · Zoya Svitkina · Erik Vee · Joshua Wang
- 2017 Workshop: Learning in the Presence of Strategic Behavior »
  Nika Haghtalab · Yishay Mansour · Tim Roughgarden · Vasilis Syrgkanis · Jennifer Wortman Vaughan
- 2017 Poster: Online Prediction with Selfish Experts »
  Tim Roughgarden · Okke Schrijvers
- 2017 Poster: Approximation Bounds for Hierarchical Clustering: Average Linkage, Bisecting K-means, and Local Search »
  Benjamin Moseley · Joshua Wang
- 2017 Oral: Approximation Bounds for Hierarchical Clustering: Average Linkage, Bisecting K-means, and Local Search »
  Benjamin Moseley · Joshua Wang
- 2015 Poster: On the Pseudo-Dimension of Nearly Optimal Auctions »
  Jamie Morgenstern · Tim Roughgarden
- 2015 Spotlight: On the Pseudo-Dimension of Nearly Optimal Auctions »
  Jamie Morgenstern · Tim Roughgarden
- 2013 Poster: Marginals-to-Models Reducibility »
  Tim Roughgarden · Michael Kearns