
Sample Complexity of Learning Mixture of Sparse Linear Regressions
Akshay Krishnamurthy · Arya Mazumdar · Andrew McGregor · Soumyabrata Pal

Thu Dec 12 05:00 PM -- 07:00 PM (PST) @ East Exhibition Hall B + C #51

In the problem of learning mixtures of linear regressions, the goal is to learn a collection of signal vectors from a sequence of (possibly noisy) linear measurements, where each measurement is evaluated on an unknown signal drawn uniformly from this collection. This setting is quite expressive and has been studied both in terms of practical applications and for the sake of establishing theoretical guarantees. In this paper, we consider the case where the signal vectors are sparse; this generalizes the popular compressed sensing paradigm. We improve upon the state-of-the-art results as follows: In the noisy case, we resolve an open question of Yin et al. (IEEE Transactions on Information Theory, 2019) by showing how to handle collections of more than two vectors and present the first robust reconstruction algorithm, i.e., if the signals are not perfectly sparse, we still learn a good sparse approximation of the signals. In the noiseless case, as well as in the noisy case, we show how to circumvent the need for a restrictive assumption required in the previous work. Our techniques are quite different from those in the previous work: for the noiseless case, we rely on a property of sparse polynomials, and for the noisy case, we provide new connections to learning Gaussian mixtures and use ideas from the theory of error-correcting codes.
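To make the measurement model concrete, here is a minimal simulation sketch of the setting the abstract describes: a collection of sparse signal vectors, with each measurement evaluated on one signal drawn uniformly at random. All variable names and parameter values are our own illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k, K, m = 100, 5, 3, 200  # ambient dim, sparsity, mixture size, #measurements
sigma = 0.01                 # noise level (set to 0 for the noiseless case)

# Build K k-sparse signal vectors (the unknown collection to be learned).
signals = np.zeros((K, n))
for j in range(K):
    support = rng.choice(n, size=k, replace=False)
    signals[j, support] = rng.standard_normal(k)

# Each measurement uses a Gaussian query vector and a hidden uniform label.
X = rng.standard_normal((m, n))   # query vectors, one per measurement
z = rng.integers(K, size=m)       # latent mixture labels (hidden from learner)
y = np.einsum('ij,ij->i', X, signals[z]) + sigma * rng.standard_normal(m)

# The learner observes only (X, y) and must recover all K sparse signals.
```

The sample-complexity question studied in the paper is how large `m` must be, as a function of `n`, `k`, `K`, and the noise level, for recovery to be possible.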

Author Information

Akshay Krishnamurthy (Microsoft)
Arya Mazumdar (University of Massachusetts Amherst)
Andrew McGregor (University of Massachusetts Amherst)
Soumyabrata Pal (University of Massachusetts Amherst)

I am a fourth-year graduate student in the Department of Computer Science at the University of Massachusetts Amherst.
