Oral
Nearly tight sample complexity bounds for learning mixtures of Gaussians via sample compression schemes
Hassan Ashtiani · Shai Ben-David · Nicholas Harvey · Christopher Liaw · Abbas Mehrabian · Yaniv Plan

Wed Dec 05 07:05 AM -- 07:20 AM (PST) @ Room 517 CD

We prove that Θ(k d^2 / ε^2) samples are necessary and sufficient for learning a mixture of k Gaussians in R^d, up to error ε in total variation distance. This improves on both the best known upper bound and the best known lower bound for this problem. For mixtures of axis-aligned Gaussians, we show that O(k d / ε^2) samples suffice, matching a known lower bound.
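For reference, the claimed bounds can be written out formally in LaTeX; n(k, d, ε) below is illustrative notation (not from the abstract) for the number of samples required to learn within total variation distance ε.

% n(k, d, eps) is illustrative notation for the sample complexity,
% i.e., the number of i.i.d. samples needed to learn the mixture
% to within total variation distance eps.
\[
  n(k, d, \varepsilon) = \Theta\!\left(\frac{k d^{2}}{\varepsilon^{2}}\right)
  \qquad \text{(mixtures of $k$ arbitrary Gaussians in $\mathbb{R}^{d}$)}
\]
\[
  n(k, d, \varepsilon) = O\!\left(\frac{k d}{\varepsilon^{2}}\right)
  \qquad \text{(mixtures of $k$ axis-aligned Gaussians)}
\]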

The upper bound is based on a novel technique for distribution learning built on a notion of sample compression. Any class of distributions that admits such a sample compression scheme can be learned with few samples. Moreover, if a class of distributions admits such a scheme, then so do the classes of products and mixtures of those distributions. The core of our main result is showing that the class of Gaussians in R^d admits an efficient sample compression scheme.
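As a rough sketch of the compression notion (the exact definition and parameters are in the paper; τ, t, m, the decoder 𝒟, and the 2/3 success probability below are assumptions made for illustration): a class 𝓕 of distributions admits a compression scheme if a fixed decoder can, with constant probability, reconstruct any target f in 𝓕 up to total variation error ε from τ of its m i.i.d. samples plus t side-information bits.

% Illustrative sketch only; tau, t, m, the decoder D, and the 2/3
% success threshold are assumed notation, not the paper's exact
% formalization.
\[
  \forall f \in \mathcal{F}:\quad
  \Pr_{S \sim f^{m}}\Big[\, \exists\, x_{1},\dots,x_{\tau} \in S,\;
  b \in \{0,1\}^{t} \;\text{such that}\;
  d_{\mathrm{TV}}\big(\mathcal{D}(x_{1},\dots,x_{\tau}, b),\, f\big)
  \le \varepsilon \,\Big] \;\ge\; \tfrac{2}{3}.
\]

The abstract's claim that such classes "can also be learned with few samples" refers to a sample complexity bound in terms of these compression parameters; the precise dependence is given in the paper.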

Author Information

Hassan Ashtiani (McMaster University)
Shai Ben-David (University of Waterloo)
Nicholas Harvey (University of British Columbia)
Christopher Liaw (University of British Columbia)
Abbas Mehrabian (McGill University)
Yaniv Plan (University of British Columbia)
