Poster
Learning and Covering Sums of Independent Random Variables with Unbounded Support
Alkis Kalavasis · Konstantinos Stavropoulos · Emmanouil Zampetakis
We study the problem of covering and learning sums $X = X_1 + \cdots + X_n$ of independent integer-valued random variables $X_i$ (SIIRVs) with infinite support. De et al. (FOCS 2018) showed that even when the collective support of the $X_i$'s is of size $4$, the maximum value of the support necessarily appears in the sample complexity of learning $X$. In this work, we address two questions: (i) Are there general families of SIIRVs with infinite support that can be learned with sample complexity independent of both $n$ and the maximal element of the support? (ii) Are there general families of SIIRVs with infinite support that admit proper sparse covers in total variation distance? For question (i), we provide a set of simple conditions that allow an infinitely supported SIIRV to be learned with complexity $\text{poly}(1/\epsilon)$, bypassing the aforementioned lower bound. We further address question (ii) in the general setting where each variable $X_i$ has a unimodal probability mass function and is a different member of some, possibly multi-parameter, exponential family $\mathcal{E}$ that satisfies certain structural properties. These properties allow $\mathcal{E}$ to contain heavy-tailed and non-log-concave distributions. Moreover, we show that for every $\epsilon > 0$ and every $k$-parameter family $\mathcal{E}$ that satisfies some structural assumptions, there exists an algorithm with $\widetilde{O}(k) \cdot \text{poly}(1/\epsilon)$ samples that learns a sum of $n$ arbitrary members of $\mathcal{E}$ within $\epsilon$ in TV distance. The output of the learning algorithm is also a sum of random variables within the family $\mathcal{E}$. En route, we prove that any discrete unimodal exponential family with bounded constant-degree central moments can be approximated by the family corresponding to a bounded subset of the initial (unbounded) parameter space.
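To make the setting concrete, here is a minimal illustrative sketch of a SIIRV with unbounded support and a proper learner for it. This is *not* the paper's algorithm: it uses the special case where each $X_i$ is Poisson (a one-parameter exponential family), so the sum itself is Poisson and a single empirical mean suffices. The function names (`sample_poisson`, `sample_siirv_sum`, `learn_total_rate`) are hypothetical, chosen for this sketch.

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's inversion sampler; adequate for small rates (illustration only).
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def sample_siirv_sum(rates, rng):
    # One draw of X = X_1 + ... + X_n with X_i ~ Poisson(rate_i):
    # an integer-valued sum with infinite (unbounded) support.
    return sum(sample_poisson(r, rng) for r in rates)

def learn_total_rate(samples):
    # A sum of independent Poissons is Poisson(sum of the rates), so the
    # empirical mean estimates the sum's single parameter.  Returning
    # Poisson(mean) gives a "proper" hypothesis: it lies inside the family.
    return sum(samples) / len(samples)

rng = random.Random(0)
rates = [0.5, 1.0, 1.5]          # true total rate = 3.0
samples = [sample_siirv_sum(rates, rng) for _ in range(5000)]
estimate = learn_total_rate(samples)
```

Note that the number of samples here depends only on the desired accuracy, not on $n$ or on any support bound, which is the flavor of the paper's poly$(1/\epsilon)$ guarantees; the general multi-parameter, heavy-tailed case treated in the paper requires substantially more machinery.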
Author Information
Alkis Kalavasis (National Technical University of Athens)
Konstantinos Stavropoulos (University of Texas at Austin)
Emmanouil Zampetakis (UC Berkeley)
More from the Same Authors

2022 Panel: Panel 3B4: Learning and Covering… & Asymptotics of smoothed… »
Jonathan Niles-Weed · Konstantinos Stavropoulos 
2022 Poster: Linear Label Ranking with Bounded Noise »
Dimitris Fotakis · Alkis Kalavasis · Vasilis Kontonis · Christos Tzamos 
2022 Poster: Perfect Sampling from Pairwise Comparisons »
Dimitris Fotakis · Alkis Kalavasis · Christos Tzamos 
2022 Poster: Multiclass Learnability Beyond the PAC Framework: Universal Rates and Partial Concept Classes »
Alkis Kalavasis · Grigoris Velegkas · Amin Karbasi 
2020 Poster: Truncated Linear Regression in High Dimensions »
Constantinos Daskalakis · Dhruv Rohatgi · Emmanouil Zampetakis 
2020 Poster: Optimal Approximation - Smoothness Tradeoffs for Soft-Max Functions »
Alessandro Epasto · Mohammad Mahdian · Vahab Mirrokni · Emmanouil Zampetakis 
2020 Spotlight: Optimal Approximation - Smoothness Tradeoffs for Soft-Max Functions »
Alessandro Epasto · Mohammad Mahdian · Vahab Mirrokni · Emmanouil Zampetakis 
2020 Poster: Constant-Expansion Suffices for Compressed Sensing with Generative Priors »
Constantinos Daskalakis · Dhruv Rohatgi · Emmanouil Zampetakis 
2020 Spotlight: Constant-Expansion Suffices for Compressed Sensing with Generative Priors »
Constantinos Daskalakis · Dhruv Rohatgi · Emmanouil Zampetakis