We describe a latent variable model for supervised dimensionality reduction and distance metric learning. The model discovers linear projections of high-dimensional data that shrink the distance between similarly labeled inputs and expand the distance between differently labeled ones. The model’s continuous latent variables locate pairs of examples in a latent space of lower dimensionality. The model differs significantly from classical factor analysis in that the posterior distribution over these latent variables is not always multivariate Gaussian. Nevertheless we show that inference is completely tractable and derive an Expectation-Maximization (EM) algorithm for parameter estimation. We also compare the model to other approaches in distance metric learning. The model’s main advantage is its simplicity: at each iteration of the EM algorithm, the distance metric is re-estimated by solving an unconstrained least-squares problem. Experiments show that these simple updates are highly effective.
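The overall structure described in the abstract — alternate between inferring latent positions and re-estimating the projection matrix by unconstrained least squares — can be sketched in a few lines of NumPy. This is not the paper's algorithm (in particular, the paper's E-step integrates over latent variables with a non-Gaussian posterior, while the stand-in below simply pulls each point toward its class mean in the current latent space); the toy data, the damping factor, and all variable names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two classes in 5-D whose separation lives in the first two
# dimensions (illustrative only; not data from the paper).
n, d, k = 100, 5, 2
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, d))
X[:, :2] += 3.0 * y[:, None]

# Linear projection defining the metric d(x, x') = ||L(x - x')||.
L = rng.normal(size=(k, d))

for _ in range(20):
    # Stand-in for the E-step: choose latent targets that pull each point
    # toward its class mean in the current latent space.
    Z = X @ L.T
    targets = np.empty_like(Z)
    for c in (0, 1):
        targets[y == c] = Z[y == c].mean(axis=0)

    # M-step in the spirit of the abstract: re-estimate the projection by
    # solving the unconstrained least-squares problem min_M ||X M - targets||^2.
    M, *_ = np.linalg.lstsq(X, targets, rcond=None)
    L = 0.5 * L + 0.5 * M.T  # damped update for stability

# After learning, same-class points cluster more tightly than the
# separation between the two class means.
Z = X @ L.T
within = np.linalg.norm(Z[y == 0] - Z[y == 0].mean(axis=0), axis=1).mean()
between = np.linalg.norm(Z[y == 0].mean(axis=0) - Z[y == 1].mean(axis=0))
```

Each update is a single call to `np.linalg.lstsq`, which is the simplicity the abstract emphasizes: no constrained optimization or eigendecomposition is needed to refresh the metric.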
Author Information
Matt F Der (UC San Diego)
Lawrence Saul (UC San Diego)
More from the Same Authors

2011 Poster: Maximum Covariance Unfolding: Manifold Learning for Bimodal Data
Vijay Mahadevan · Chi Wah Wong · Jose Costa Pereira · Tom Liu · Nuno Vasconcelos · Lawrence Saul
2010 Talk: Manifold Learning
Lawrence Saul
2010 Poster: Latent Variable Models for Predicting File Dependencies in Large-Scale Software Development
Diane Hu · Laurens van der Maaten · Youngmin Cho · Lawrence Saul · Sorin Lerner
2009 Poster: Kernel Methods for Deep Learning
Youngmin Cho · Lawrence Saul
2006 Poster: Large Margin Gaussian Mixture Models for Automatic Speech Recognition
Fei Sha · Lawrence Saul
2006 Talk: Large Margin Gaussian Mixture Models for Automatic Speech Recognition
Fei Sha · Lawrence Saul
2006 Poster: Graph Regularization for Maximum Variance Unfolding with an Application to Sensor Localization
Kilian Q Weinberger · Fei Sha · Qihui Zhu · Lawrence Saul