
Practical Large-Scale Optimization for Max-norm Regularization
Jason D Lee · Benjamin Recht · Russ Salakhutdinov · Nati Srebro · Joel A Tropp

Tue Dec 07 12:00 AM -- 12:00 AM (PST)

The max-norm was proposed as a convex matrix regularizer by Srebro et al. (2004) and was shown to be empirically superior to the trace-norm for collaborative filtering problems. Although the max-norm can be computed in polynomial time, there are currently no practical algorithms for solving large-scale optimization problems that incorporate the max-norm. The present work uses a factorization technique of Burer and Monteiro (2003) to devise scalable first-order algorithms for convex programs involving the max-norm. These algorithms are applied to solve huge collaborative filtering, graph cut, and clustering problems. Empirically, the new methods outperform mature techniques from all three areas.
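The core idea can be sketched concretely. Writing X = L Rᵀ, the max-norm bound ||X||_max ≤ B is enforced by keeping every row of L and R inside a Euclidean ball of radius √B, and a projected gradient method on the factors then scales to large problems. The sketch below is a minimal illustration for matrix completion, not the authors' implementation: the rank, step size, and iteration count are arbitrary assumptions, and the function names (`project_rows`, `maxnorm_matrix_completion`) are hypothetical.

```python
import numpy as np

def project_rows(A, radius):
    # Project each row of A onto the Euclidean ball of the given radius.
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    scale = np.minimum(1.0, radius / np.maximum(norms, 1e-12))
    return A * scale

def maxnorm_matrix_completion(Y, mask, rank=10, radius=1.0,
                              step=0.01, iters=500, seed=0):
    """Projected gradient descent on the factorization X = L @ R.T.

    Keeping every row of L and R inside the ball of radius sqrt(B)
    enforces the max-norm constraint ||X||_max <= B (here B = radius**2).
    Hyperparameters are illustrative, not from the paper.
    """
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    L = 0.1 * rng.standard_normal((m, rank))
    R = 0.1 * rng.standard_normal((n, rank))
    for _ in range(iters):
        E = mask * (L @ R.T - Y)      # residual on observed entries only
        gL, gR = E @ R, E.T @ L       # gradients of the squared loss
        L = project_rows(L - step * gL, radius)
        R = project_rows(R - step * gR, radius)
    return L, R
```

Because each iteration touches only the observed entries and the two thin factors, the cost per step is linear in the number of observations, which is what makes the approach viable at collaborative-filtering scale.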

Author Information

Jason D Lee (University of Southern California)
Benjamin Recht (UW-Madison)
Russ Salakhutdinov (Carnegie Mellon University)
Nati Srebro (TTI-Chicago)
Joel A Tropp (Caltech)

Joel A. Tropp is Professor of Applied & Computational Mathematics at the California Institute of Technology. He earned the Ph.D. degree in Computational Applied Mathematics from the University of Texas at Austin in 2004. Prof. Tropp's work lies at the interface of applied mathematics, electrical engineering, computer science, and statistics. The bulk of this research concerns the theoretical and computational aspects of sparse approximation, compressive sampling, and randomized linear algebra. He has also worked extensively on the properties of structured random matrices. Prof. Tropp has received several major awards for young researchers, including the 2007 ONR Young Investigator Award and the 2008 Presidential Early Career Award for Scientists and Engineers. He is also the winner of the 32nd annual award for Excellence in Teaching from the Associated Students of the California Institute of Technology.
