Workshop
Low-rank Methods for Large-scale Machine Learning
Arthur Gretton · Michael W Mahoney · Mehryar Mohri · Ameet S Talwalkar

Sat Dec 11 07:30 AM -- 06:30 PM (PST) @ Westin: Alpine BC
Event URL: http://www.eecs.berkeley.edu/~ameet/low-rank-nips10/

Today's data-driven society is full of large-scale datasets. In machine learning, these datasets are often represented by large matrices that encode either real-valued features for each point or pairwise similarities between points. Modern learning problems in computer vision, natural language processing, computational biology, and other areas therefore face the daunting task of storing and operating on matrices with thousands to millions of entries. An attractive solution is to work with low-rank approximations of the original matrix. Low-rank approximation is at the core of widely used algorithms such as Principal Component Analysis, Multidimensional Scaling, Latent Semantic Indexing, and manifold learning. Furthermore, low-rank matrices appear in a wide variety of applications including lossy data compression, collaborative filtering, image processing, text analysis, matrix completion, and metric learning. In this workshop, we aim to survey recent work on matrix approximation, with an emphasis on its usefulness for practical large-scale machine learning problems, and to provide a forum for researchers to discuss several important questions associated with low-rank approximation techniques.
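As a concrete illustration of the kind of method in scope, below is a minimal sketch of the Nyström approximation, which builds a low-rank approximation of a large similarity (kernel) matrix from a small sample of its columns. The RBF kernel, data sizes, and landmark count are illustrative assumptions, not taken from any workshop talk.

    import numpy as np

    # Gaussian (RBF) similarity between the rows of X and the rows of Y.
    def rbf_kernel(X, Y, gamma=0.1):
        sq = (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :] - 2.0 * X @ Y.T
        return np.exp(-gamma * sq)

    rng = np.random.default_rng(0)
    n, d, m = 2000, 10, 100            # n points, d features, m sampled landmarks
    X = rng.standard_normal((n, d))

    idx = rng.choice(n, size=m, replace=False)
    C = rbf_kernel(X, X[idx])          # n x m slice of the full kernel matrix
    W = C[idx]                         # m x m block among the landmark points

    # Nystrom approximation: K is approximated by C @ pinv(W) @ C.T. In practice
    # one keeps the factors (C, pinv(W)); the full n x n product is formed here
    # only to measure the error against the exact kernel matrix.
    K = rbf_kernel(X, X)
    K_hat = C @ np.linalg.pinv(W) @ C.T
    print("relative Frobenius error:", np.linalg.norm(K - K_hat) / np.linalg.norm(K))

Storing the factors C and W requires O(nm + m^2) memory rather than the O(n^2) needed for the full similarity matrix, which is what makes such approximations attractive at scale.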

Author Information

Arthur Gretton (Gatsby Unit, UCL)

Arthur Gretton is a Professor with the Gatsby Computational Neuroscience Unit at UCL. He received degrees in Physics and Systems Engineering from the Australian National University, and a PhD with Microsoft Research and the Signal Processing and Communications Laboratory at the University of Cambridge. He previously worked at the MPI for Biological Cybernetics and at the Machine Learning Department, Carnegie Mellon University. Arthur's recent research interests in machine learning include the design and training of generative models, both implicit (e.g. GANs) and explicit (high/infinite dimensional exponential family models), nonparametric hypothesis testing, and kernel methods. He was an associate editor at IEEE Transactions on Pattern Analysis and Machine Intelligence from 2009 to 2013, and has been an Action Editor for JMLR since April 2013. He served as an Area Chair for NeurIPS in 2008 and 2009, a Senior Area Chair for NeurIPS in 2018, an Area Chair for ICML in 2011 and 2012, and a member of the COLT Program Committee in 2013. Arthur was program chair for AISTATS in 2016 (with Christian Robert), tutorials chair for ICML 2018 (with Ruslan Salakhutdinov), workshops chair for ICML 2019 (with Honglak Lee), program chair for the DALI workshop in 2019 (with Krikamol Muandet and Shakir Mohamed), and co-organiser of the Machine Learning Summer School 2019 in London (with Marc Deisenroth).

Michael W Mahoney (UC Berkeley)
Mehryar Mohri (Courant Inst. of Math. Sciences & Google Research)
Ameet S Talwalkar (CMU)