Sparse Filtering
Jiquan Ngiam · Pang Wei Koh · Zhenghao Chen · Sonia A Bhaskar · Andrew Y Ng

Wed Dec 14 08:45 AM -- 02:59 PM (PST)

Unsupervised feature learning has been shown to be effective at learning representations that perform well on image, video and audio classification. However, many existing feature learning algorithms are hard to use and require extensive hyperparameter tuning. In this work, we present sparse filtering, a simple new algorithm which is efficient and only has one hyperparameter, the number of features to learn. In contrast to most other feature learning methods, sparse filtering does not explicitly attempt to construct a model of the data distribution. Instead, it optimizes a simple cost function -- the sparsity of L2-normalized features -- which can easily be implemented in a few lines of MATLAB code. Sparse filtering scales gracefully to handle high-dimensional inputs, and can also be used to learn meaningful features in additional layers with greedy layer-wise stacking. We evaluate sparse filtering on natural images, object classification (STL-10), and phone classification (TIMIT), and show that our method works well on a range of different modalities.
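The abstract notes that the sparse filtering objective — the sparsity of L2-normalized features — fits in a few lines of code. The following is a minimal NumPy sketch of that objective, not the authors' MATLAB implementation: activations are passed through a soft absolute-value nonlinearity, normalized per feature across examples, then per example, and the L1 norm of the result is the cost. The `eps` constant and the matrix shapes are assumptions for illustration.

```python
import numpy as np

def sparse_filtering_objective(W, X, eps=1e-8):
    """Sparse filtering cost: L1 penalty on doubly L2-normalized features.

    W : (n_features, input_dim) weight matrix (the only learned parameter)
    X : (input_dim, n_examples) data matrix
    """
    # Soft absolute-value activations, smooth near zero
    F = np.sqrt((W @ X) ** 2 + eps)
    # Normalize each feature (row) to unit L2 norm across examples
    F = F / np.sqrt((F ** 2).sum(axis=1, keepdims=True) + eps)
    # Normalize each example (column) to unit L2 norm across features
    F = F / np.sqrt((F ** 2).sum(axis=0, keepdims=True) + eps)
    # Sparsity cost: sum of absolute values (entries are non-negative here)
    return F.sum()

# Tiny illustrative setup (random data; sizes are arbitrary)
rng = np.random.default_rng(0)
X = rng.normal(size=(16, 200))   # 16-dim inputs, 200 examples
W = rng.normal(size=(32, 16))    # learn 32 features
cost = sparse_filtering_objective(W, X)
```

In practice `W` would be fit by an off-the-shelf optimizer such as L-BFGS on this cost; the number of features (32 here) is the algorithm's single hyperparameter.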

Author Information

Jiquan Ngiam (Stanford University)
Pang Wei Koh (University of Washington)
Zhenghao Chen (Stanford University)
Sonia A Bhaskar (Stanford University)
Andrew Y Ng (DeepLearning.AI)
