

Poster

Efficient Deep Approximation of GMMs

Shirin Jalali · Carl Nuzman · Iraj Saniee

East Exhibition Hall B + C #217

Keywords: [ Information Theory ] [ Spaces of Functions and Kernels ] [ Theory ]


Abstract: The universal approximation theorem states that any regular function can be approximated closely using a single hidden layer neural network. Recent work has shown that, for some special functions, the number of nodes in such an approximation can be reduced exponentially by using multi-layer neural networks. In this work, we extend this idea to a rich class of functions, namely the discriminant functions that arise in optimal Bayesian classification of Gaussian mixture models (GMMs) in $\mathbb{R}^n$. We show that such functions can be approximated with arbitrary precision using $O(n)$ nodes in a neural network with two hidden layers (a deep neural network), whereas a neural network with a single hidden layer (a shallow neural network) would require $\Omega(\exp(n))$ nodes or exponentially large coefficients. Given the universality of the Gaussian distribution in the feature spaces of data, e.g., in speech, image, and text, our results shed light on the observed efficiency of deep neural networks in practical classification problems.
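
For context, one standard formulation of the discriminant functions in question (the notation below is not given on this page and is used here only as an illustrative sketch): for a two-class problem in which each class-conditional density is a Gaussian mixture,

$$p_c(x) = \sum_{k=1}^{K_c} w_{c,k}\, \mathcal{N}(x;\, \mu_{c,k},\, \Sigma_{c,k}), \qquad c \in \{1, 2\},$$

the Bayes-optimal classifier thresholds the log-likelihood ratio

$$d(x) = \log p_1(x) - \log p_2(x) + \log\frac{\pi_1}{\pi_2},$$

declaring class 1 when $d(x) \ge 0$, where $\pi_1, \pi_2$ are the class priors. The abstract's claim, in these terms, is that discriminant functions of this form admit arbitrarily accurate approximations by two-hidden-layer networks of width $O(n)$, whereas single-hidden-layer networks cannot achieve this without exponential width or exponentially large coefficients.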
