Kernel Principal Component Analysis (KPCA) is a popular generalization of linear PCA that allows non-linear feature extraction. In KPCA, data in the input space is mapped to a (usually) higher-dimensional feature space where the data can be modeled linearly. The feature space is typically induced implicitly by a kernel function, and linear PCA in the feature space is performed via the kernel trick. However, because the feature space is implicit, some extensions of PCA, such as robust PCA, cannot be directly generalized to KPCA. This paper presents a technique to overcome this problem and extends it to a unified framework for treating noise, missing data, and outliers in KPCA. Our method is based on a novel cost function for performing inference in KPCA. Extensive experiments on both synthetic and real data show that our algorithm outperforms existing methods.
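For readers unfamiliar with the baseline the paper builds on, the standard (non-robust) KPCA procedure the abstract refers to can be sketched as follows: build a kernel matrix, center it in feature space, eigendecompose it, and project the data onto the leading components. This is a minimal illustration of the kernel trick with an assumed RBF kernel, not the robust method proposed in the paper.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Standard KPCA sketch: linear PCA in an RBF-induced feature space
    performed implicitly via the kernel trick."""
    # Pairwise squared Euclidean distances between input points
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-gamma * d2)  # RBF (Gaussian) kernel matrix

    # Center the kernel matrix, i.e. center the data in feature space
    n = K.shape[0]
    one_n = np.ones((n, n)) / n
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n

    # Eigendecomposition of the centered kernel matrix
    # (eigh returns eigenvalues in ascending order)
    eigvals, eigvecs = np.linalg.eigh(Kc)
    idx = np.argsort(eigvals)[::-1][:n_components]
    alphas, lambdas = eigvecs[:, idx], eigvals[idx]

    # Projections of the training points onto the principal components
    return alphas * np.sqrt(lambdas)
```

Because the mapping into feature space is never computed explicitly, operations that robust PCA needs in the input space (e.g., measuring reconstruction error per sample) are not directly available, which is exactly the difficulty the paper's cost function addresses.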
Author Information
Minh Hoai Nguyen (Stony Brook University)
Fernando D De la Torre (Carnegie Mellon University)
Related Events (a corresponding poster, oral, or spotlight)
- 2008 Spotlight: Robust Kernel Principal Component Analysis
  Thu. Dec 11th 01:26 -- 01:27 AM
More from the Same Authors
- 2022: Making Text-to-Image Diffusion Models Zero-Shot Image-to-Image Editors by Inferring "Random Seeds"
  Chen Henry Wu · Fernando D De la Torre
- 2022 Poster: Generative Visual Prompt: Unifying Distributional Control of Pre-Trained Generative Models
  Chen Henry Wu · Saman Motamed · Shaunak Srivastava · Fernando D De la Torre
- 2011 Poster: Matrix Completion for Image Classification
  Ricardo S Cabral · Fernando D De la Torre · Joao P Costeira · Alexandre Bernardino
- 2009 Poster: Canonical Time Warping for Alignment of Human Behavior
  Feng Zhou · Fernando D De la Torre