Spotlight
Optimal Approximation - Smoothness Tradeoffs for Soft-Max Functions
Alessandro Epasto · Mohammad Mahdian · Vahab Mirrokni · Emmanouil Zampetakis

Thu Dec 10 07:10 AM -- 07:20 AM (PST) @ Orals & Spotlights: Optimization/Theory
A soft-max function has two main efficiency measures: (1) approximation, which corresponds to how well it approximates the maximum function, and (2) smoothness, which captures how sensitive it is to changes in its input. Our goal is to identify the optimal approximation-smoothness tradeoffs for different measures of approximation and smoothness. This leads to novel soft-max functions, each of which is optimal for a different application. The most commonly used soft-max function, called the exponential mechanism, has an optimal tradeoff between approximation, measured in terms of expected additive approximation, and smoothness, measured with respect to Rényi divergence. We introduce a soft-max function, called the piece-wise linear soft-max, with an optimal tradeoff between approximation, measured in terms of worst-case additive approximation, and smoothness, measured with respect to the l_q-norm. The worst-case approximation guarantee of the piece-wise linear mechanism enforces sparsity in the output of our soft-max function, a property that is known to be important in Machine Learning applications [Martins et al. '16, Laha et al. '18] and is not satisfied by the exponential mechanism. Moreover, the l_q-smoothness is suitable for applications in Mechanism Design and Game Theory, where the piece-wise linear mechanism outperforms the exponential mechanism. Finally, we investigate another soft-max function, called the power mechanism, with an optimal tradeoff between expected multiplicative approximation and smoothness with respect to Rényi divergence, which provides improved theoretical and practical results in differentially private submodular optimization.
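As a concrete illustration of the two regimes described above, the following NumPy sketch implements the standard exponential soft-max (the exponential mechanism, with a temperature parameter governing the approximation-smoothness tradeoff) and sparsemax from the cited Martins et al. '16, as an example of a soft-max whose output is exactly sparse. This is an illustrative sketch only; the piece-wise linear and power mechanisms introduced in the paper are not reproduced here, and the temperature value in the usage example is arbitrary.

import numpy as np

def exponential_softmax(x, temperature=1.0):
    # Standard exponential soft-max: exp(x_i / t) / sum_j exp(x_j / t).
    # Smaller temperatures track the maximum more closely but are more
    # sensitive to input perturbations (less smooth).
    z = np.asarray(x, dtype=float) / temperature
    z -= z.max()          # subtract the max for numerical stability
    w = np.exp(z)
    return w / w.sum()

def sparsemax(x):
    # Sparsemax (Martins et al. '16): Euclidean projection of the input
    # onto the probability simplex; low-scoring entries get exactly zero.
    z = np.asarray(x, dtype=float)
    z_sorted = np.sort(z)[::-1]
    k = np.arange(1, z.size + 1)
    cumsum = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cumsum
    tau = (cumsum[support][-1] - 1) / k[support][-1]
    return np.maximum(z - tau, 0.0)

scores = np.array([3.0, 1.0, 0.2, -1.0])
print(exponential_softmax(scores, temperature=0.5))  # dense: every entry > 0
print(sparsemax(scores))                             # sparse: [1., 0., 0., 0.]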

Author Information

Alessandro Epasto (Google)

I am a staff research scientist at Google, New York, working in the Google Research Algorithms and Optimization team led by Vahab Mirrokni. I received a Ph.D. in computer science from Sapienza University of Rome, where I was advised by Professor Alessandro Panconesi and supported by the Google Europe Ph.D. Fellowship in Algorithms, 2011. I was also a postdoc in the Department of Computer Science at Brown University in Providence, RI, USA, where I was advised by Professor Eli Upfal. My research interests include algorithmic problems in machine learning and data mining, in particular in the areas of clustering, privacy, and large-scale graph analysis.

Mohammad Mahdian (Google Research)
Vahab Mirrokni (Google Research NYC)
Emmanouil Zampetakis (Massachusetts Institute of Technology)
