Poster
Submodular + Concave
Siddharth Mitra · Moran Feldman · Amin Karbasi
It has been well established that first-order optimization methods can converge to the maximal objective value of concave functions and provide constant-factor approximation guarantees for (non-convex/non-concave) continuous submodular functions. In this work, we initiate the study of the maximization of functions of the form $F(x) = G(x) + C(x)$ over a solvable convex body $P$, where $G$ is a smooth DR-submodular function and $C$ is a smooth concave function. This class of functions is a strict extension of both concave functions and continuous DR-submodular functions, and no theoretical guarantees were previously known for it. We provide a suite of Frank-Wolfe style algorithms which, depending on the nature of the objective function (i.e., whether $G$ and $C$ are monotone or not, and non-negative or not) and on the nature of the set $P$ (i.e., whether it is downward closed or not), provide $1-1/e$, $1/e$, or $1/2$ approximation guarantees. We then use our algorithms to obtain a framework that smoothly interpolates between choosing a diverse set of elements from a given ground set (corresponding to the mode of a determinantal point process) and choosing a clustered set of elements (corresponding to the maxima of a suitable concave function). Additionally, we apply our algorithms to various functions in the above class (DR-submodular + concave) in both constrained and unconstrained settings, and show that they consistently outperform natural baselines.
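The algorithms described above are Frank-Wolfe style methods that repeatedly query a linear maximization oracle over $P$. As a rough illustration only, the following is a minimal Python sketch of a continuous-greedy style Frank-Wolfe update on a toy instance of $F = G + C$; the oracle `lmo`, the gradient `grad_F`, the step count `T`, and the box constraint are illustrative assumptions, not the paper's exact algorithms or guarantees.

```python
# Minimal sketch (not the authors' exact method) of a continuous-greedy style
# Frank-Wolfe update for F(x) = G(x) + C(x) over a convex body P.
import numpy as np

def frank_wolfe_maximize(grad_F, lmo, dim, T=100):
    """Start at 0 and move a 1/T fraction toward the best feasible
    direction returned by the linear maximization oracle at each step."""
    x = np.zeros(dim)
    for _ in range(T):
        g = grad_F(x)   # gradient of G + C at the current point
        v = lmo(g)      # argmax over v in P of <g, v>
        x = x + v / T   # each coordinate gains at most 1/T per step,
                        # so after T steps x stays in the box [0, 1]^d
    return x

# Toy instance: G(x) = sum(log(1 + x)) is monotone DR-submodular on [0,1]^d,
# C(x) = -0.5 * ||x - c||^2 is concave; P = [0,1]^d is a downward-closed box.
d = 5
c = np.full(d, 0.3)
grad_F = lambda x: 1.0 / (1.0 + x) - (x - c)
lmo = lambda g: (g > 0).astype(float)  # over the box: 1 where the gradient is positive
print(frank_wolfe_maximize(grad_F, lmo, d))
```

On a downward-closed body such as this box, moving a $1/T$ fraction toward the oracle's output at every step keeps the iterate feasible, which mirrors the continuous-greedy idea underlying $1-1/e$-type guarantees for monotone objectives.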
Author Information
Siddharth Mitra (Yale University)
Moran Feldman (University of Haifa)
Amin Karbasi (Yale University)
More from the Same Authors
- 2022: Exact Gradient Computation for Spiking Neural Networks
  Jane Lee · Saeid Haghighatshoar · Amin Karbasi
- 2022 Poster: Using Partial Monotonicity in Submodular Maximization
  Loay Mualem · Moran Feldman
- 2022 Poster: Submodular Maximization in Clean Linear Time
  Wenxin Li · Moran Feldman · Ehsan Kazemi · Amin Karbasi
- 2022 Poster: Universal Rates for Interactive Learning
  Steve Hanneke · Amin Karbasi · Shay Moran · Grigoris Velegkas
- 2022 Poster: Black-Box Generalization: Stability of Zeroth-Order Learning
  Konstantinos Nikolakakis · Farzin Haddadpour · Dionysis Kalogerias · Amin Karbasi
- 2022 Poster: Reinforcement Learning with Logarithmic Regret and Policy Switches
  Grigoris Velegkas · Zhuoran Yang · Amin Karbasi
- 2022 Poster: Multiclass Learnability Beyond the PAC Framework: Universal Rates and Partial Concept Classes
  Alkis Kalavasis · Grigoris Velegkas · Amin Karbasi
- 2022 Poster: Fast Neural Kernel Embeddings for General Activations
  Insu Han · Amir Zandieh · Jaehoon Lee · Roman Novak · Lechao Xiao · Amin Karbasi
- 2022 Poster: On Optimal Learning Under Targeted Data Poisoning
  Steve Hanneke · Amin Karbasi · Mohammad Mahmoody · Idan Mehalel · Shay Moran
- 2021 Poster: An Exponential Improvement on the Memorization Capacity of Deep Threshold Networks
  Shashank Rajput · Kartik Sreenivasan · Dimitris Papailiopoulos · Amin Karbasi
- 2021 Poster: Multiple Descent: Design Your Own Generalization Curve
  Lin Chen · Yifei Min · Mikhail Belkin · Amin Karbasi
- 2021 Poster: Parallelizing Thompson Sampling
  Amin Karbasi · Vahab Mirrokni · Mohammad Shadravan
- 2020 Poster: Continuous Submodular Maximization: Beyond DR-Submodularity
  Moran Feldman · Amin Karbasi
- 2013 Poster: Noise-Enhanced Associative Memories
  Amin Karbasi · Amir Hesam Salavati · Amin Shokrollahi · Lav R Varshney
- 2013 Poster: Distributed Submodular Maximization: Identifying Representative Elements in Massive Data
  Baharan Mirzasoleiman · Amin Karbasi · Rik Sarkar · Andreas Krause
- 2013 Spotlight: Noise-Enhanced Associative Memories
  Amin Karbasi · Amir Hesam Salavati · Amin Shokrollahi · Lav R Varshney