Poster
Learning Distributions Generated by Single-Layer ReLU Networks in the Presence of Arbitrary Outliers
Saikiran Bulusu · Geethu Joseph · M. Cenk Gursoy · Pramod Varshney
We consider a set of data samples such that a fraction of the samples are arbitrary outliers, and the rest are the output samples of a single-layer neural network with rectified linear unit (ReLU) activation. Our goal is to estimate the parameters (weight matrix and bias vector) of the neural network, assuming the bias vector to be nonnegative. We estimate the network parameters using the gradient descent algorithm combined with either the median or trimmed mean-based filters to mitigate the effect of the arbitrary outliers. We then prove that $\tilde{O}\left( \frac{1}{p^2}+\frac{1}{\epsilon^2p}\right)$ samples and $\tilde{O}\left( \frac{d^2}{p^2}+ \frac{d^2}{\epsilon^2p}\right)$ time are sufficient for our algorithm to estimate the neural network parameters within an error of $\epsilon$ when the outlier probability is $1-p$, where $2/3
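The abstract describes gradient descent whose per-iteration gradient is aggregated with a robust filter (median or trimmed mean) so that arbitrary outlier samples cannot dominate the update. Below is a minimal, generic sketch of this idea (not the authors' implementation): it estimates $(W, b)$ of $y = \mathrm{ReLU}(Wx + b)$ by replacing the ordinary gradient average with a coordinate-wise trimmed mean over per-sample gradients. All function names, hyperparameters, and the data model are illustrative assumptions.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def trimmed_mean(values, trim_frac):
    """Coordinate-wise trimmed mean: sort along the sample axis,
    drop the smallest and largest trim_frac fraction, then average.
    Extreme per-sample gradients caused by outliers are discarded."""
    n = values.shape[0]
    k = int(np.floor(trim_frac * n))
    sorted_vals = np.sort(values, axis=0)
    return sorted_vals[k:n - k].mean(axis=0)

def robust_gd(X, Y, d_out, steps=200, lr=0.1, trim_frac=0.1, seed=0):
    """Illustrative robust estimation of (W, b) in y = ReLU(W x + b)
    from samples (X, Y), a fraction of which may be arbitrary outliers.
    Uses gradient descent with a trimmed-mean filter on the
    per-sample gradients (hypothetical hyperparameters)."""
    rng = np.random.default_rng(seed)
    n, d_in = X.shape
    W = 0.1 * rng.standard_normal((d_out, d_in))
    b = np.zeros(d_out)  # bias assumed nonnegative, init at 0
    for _ in range(steps):
        Z = X @ W.T + b          # (n, d_out) pre-activations
        R = relu(Z) - Y          # residuals of the squared loss
        G = R * (Z > 0)          # ReLU derivative gates the gradient
        # per-sample gradients w.r.t. W, shape (n, d_out, d_in)
        gW = G[:, :, None] * X[:, None, :]
        W -= lr * trimmed_mean(gW, trim_frac)
        b -= lr * trimmed_mean(G, trim_frac)
    return W, b
```

Swapping `trimmed_mean` for `np.median(values, axis=0)` gives the median-based variant mentioned in the abstract; the key design point in both cases is that the filter is applied per gradient coordinate before the descent step.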
Author Information
Saikiran Bulusu (Syracuse University)
Geethu Joseph (TU Delft)
M. Cenk Gursoy (Syracuse University)
Pramod Varshney (Syracuse University)
More from the Same Authors

2021 Poster: STEM: A Stochastic Two-Sided Momentum Algorithm Achieving Near-Optimal Sample and Communication Complexities for Federated Learning »
Prashant Khanduri · Pranay Sharma · Haibo Yang · Mingyi Hong · Jia Liu · Ketan Rajawat · Pramod Varshney