Poster
Alternating minimization for dictionary learning with random initialization
Niladri Chatterji · Peter Bartlett

Tue Dec 05 06:30 PM -- 10:30 PM (PST) @ Pacific Ballroom #45
We present theoretical guarantees for an alternating minimization algorithm for the dictionary learning/sparse coding problem. The dictionary learning problem is to factorize vector samples $y^{1},y^{2},\ldots, y^{n}$ into an appropriate basis (dictionary) $A^*$ and sparse vectors $x^{1*},\ldots,x^{n*}$. Our algorithm is a simple alternating minimization procedure that switches between $\ell_1$ minimization and gradient descent in alternate steps. Dictionary learning, and specifically alternating minimization algorithms for dictionary learning, are well studied both theoretically and empirically. However, in contrast to previous theoretical analyses for this problem, we replace the condition on the operator norm (that is, the largest magnitude singular value) of the true underlying dictionary $A^*$ with a condition on the matrix infinity norm (that is, the largest magnitude entry). This not only allows us to obtain convergence rates for the error of the estimated dictionary measured in the matrix infinity norm, but also ensures that a random initialization will provably converge to the global optimum. Our guarantees are under a reasonable generative model that allows for dictionaries with growing operator norms, and can handle an arbitrary level of overcompleteness, while having sparsity that is information-theoretically optimal. We also establish upper bounds on the sample complexity of our algorithm.
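For concreteness, the following is a minimal sketch of an alternating minimization loop of the kind the abstract describes: an $\ell_1$ minimization step to estimate the sparse codes with the dictionary fixed, followed by a gradient-descent step on the dictionary with the codes fixed. The $\ell_1$ solver (ISTA), the random unit-norm initialization, the regularization parameter, and the step size are illustrative assumptions, not the paper's exact scheme or tuning.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the l1 norm: shrinks entries toward zero.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_code_ista(A, y, lam, n_iters=100):
    # l1-regularized least squares (LASSO) solved by ISTA:
    #   min_x 0.5 * ||y - A x||_2^2 + lam * ||x||_1
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth part
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

def alternating_minimization(Y, k, lam=0.1, step=0.1, n_rounds=50, seed=None):
    # Y: d x n matrix of samples; k: number of dictionary atoms.
    # Random initialization of the dictionary, as the paper's
    # guarantees permit (hypothetical choice of distribution).
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    A = rng.standard_normal((d, k))
    A /= np.linalg.norm(A, axis=0)  # unit-norm atoms
    for _ in range(n_rounds):
        # Step 1: l1 minimization for the sparse codes, A fixed.
        X = np.column_stack([sparse_code_ista(A, Y[:, i], lam)
                             for i in range(n)])
        # Step 2: one gradient step on A for 0.5/n * ||Y - A X||_F^2, X fixed.
        A -= step * (A @ X - Y) @ X.T / n
        A /= np.maximum(np.linalg.norm(A, axis=0), 1e-12)  # renormalize atoms
    return A, X
```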

Author Information

Niladri Chatterji (UC Berkeley)
Peter Bartlett (UC Berkeley)

Peter Bartlett is professor of Computer Science and Statistics at the University of California at Berkeley, Associate Director of the Simons Institute for the Theory of Computing, and Director of the Foundations of Data Science Institute. He has previously held positions at the Queensland University of Technology, the Australian National University and the University of Queensland. His research interests include machine learning and statistical learning theory, and he is the co-author of the book Neural Network Learning: Theoretical Foundations. He has been Institute of Mathematical Statistics Medallion Lecturer, winner of the Malcolm McIntosh Prize for Physical Scientist of the Year, and Australian Laureate Fellow, and he is a Fellow of the IMS, Fellow of the ACM, and Fellow of the Australian Academy of Science.
