NIPS 2010


Workshop

Low-rank Methods for Large-scale Machine Learning

Arthur Gretton · Michael W Mahoney · Mehryar Mohri · Ameet S Talwalkar

Westin: Alpine BC

Today's data-driven society is full of large-scale datasets. In the context of machine learning, these datasets are often represented by large matrices that store either a set of real-valued features for each point or pairwise similarities between points. Hence, modern learning problems in computer vision, natural language processing, computational biology, and other areas often face the daunting task of storing and operating on matrices with thousands to millions of entries. An attractive solution to this problem involves working with low-rank approximations of the original matrix. Low-rank approximation is at the core of widely used algorithms such as Principal Component Analysis, Multidimensional Scaling, Latent Semantic Indexing, and manifold learning. Furthermore, low-rank matrices appear in a wide variety of applications including lossy data compression, collaborative filtering, image processing, text analysis, matrix completion, and metric learning. In this workshop, we aim to survey recent work on matrix approximation, with an emphasis on its usefulness for practical large-scale machine learning problems, and to provide a forum for researchers to discuss several important questions associated with low-rank approximation techniques.
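For concreteness, the sketch below illustrates the basic idea of a rank-k approximation via truncated SVD. It is not part of the workshop description; the matrix, the target rank, and the use of NumPy are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: rank-k approximation of a matrix via truncated SVD.
# The data matrix A and the target rank k are hypothetical placeholders.
rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 200))  # illustrative data matrix (points x features)
k = 10                                # target rank

# Full SVD; for truly large matrices one would use a randomized or
# iterative solver instead of computing the full decomposition.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Keep only the top-k singular triplets.
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# By the Eckart-Young theorem, A_k is the best rank-k approximation
# of A in both the Frobenius and spectral norms.
rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
print(f"relative Frobenius error of rank-{k} approximation: {rel_err:.3f}")
```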
