The performance of standard algorithms for Independent Component Analysis (ICA) quickly deteriorates under the addition of Gaussian noise. This is partially due to a common first step that typically consists of whitening, i.e., applying Principal Component Analysis (PCA) and rescaling the components to have identity covariance; this step is not invariant under Gaussian noise. In our paper we develop the first practical algorithm for Independent Component Analysis that is provably invariant under Gaussian noise. The two main contributions of this work are as follows:

1. We develop and implement a more efficient version of a Gaussian noise invariant decorrelation (quasi-orthogonalization) algorithm using Hessians of the cumulant functions.
2. We propose a very simple and efficient fixed-point GI-ICA (Gradient Iteration ICA) algorithm, which is compatible with quasi-orthogonalization as well as with the usual PCA-based whitening in the noiseless case. The algorithm is based on a special form of gradient iteration (distinct from gradient descent).

We provide an analysis of our algorithm demonstrating fast convergence following from the basic properties of cumulants. We also present a number of experimental comparisons with existing methods, showing superior results on noisy data and very competitive performance in the noiseless case.
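To make the gradient-iteration idea concrete, the following is a minimal one-unit sketch in NumPy. It assumes PCA-whitened, zero-mean data and uses the empirical gradient of the fourth cumulant of the projection u·x; the function names (`whiten`, `gradient_iteration`) and the kurtosis-based variant are illustrative assumptions, not the paper's full noisy-ICA implementation (which would replace PCA whitening with quasi-orthogonalization).

```python
import numpy as np

def whiten(X):
    """PCA whitening: center X (n_samples, d) and rescale to identity covariance."""
    Xc = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T   # symmetric whitening matrix
    return Xc @ W

def gradient_iteration(Z, n_iter=100, tol=1e-8, seed=None):
    """One-unit fixed-point iteration u <- grad kappa_4(u . z) / ||grad||.

    For whitened data with ||u|| = 1, the gradient of the fourth cumulant
    kappa_4(u . z) is proportional to E[(u.z)^3 z] - 3u, so iterating this
    normalized gradient is one simple instance of gradient iteration
    (a fixed-point update, not gradient descent).
    """
    rng = np.random.default_rng(seed)
    u = rng.normal(size=Z.shape[1])
    u /= np.linalg.norm(u)
    for _ in range(n_iter):
        y = Z @ u
        g = (y ** 3) @ Z / len(Z) - 3.0 * u     # empirical cumulant gradient
        g /= np.linalg.norm(g)
        done = abs(abs(g @ u) - 1.0) < tol      # converged up to sign
        u = g
        if done:
            break
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S = rng.uniform(-1, 1, size=(10000, 2))     # independent non-Gaussian sources
    A = rng.normal(size=(2, 2))                 # unknown mixing matrix
    X = S @ A.T                                 # observed mixtures
    print("recovered direction:", gradient_iteration(whiten(X), seed=0))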
Author Information
James R Voss
Luis Rademacher (The Ohio State University)
Mikhail Belkin (The Ohio State University)
More from the Same Authors
- 2021 Poster: Risk Bounds for Over-parameterized Maximum Margin Classification on Sub-Gaussian Mixtures
  Yuan Cao · Quanquan Gu · Mikhail Belkin
- 2021 Poster: Multiple Descent: Design Your Own Generalization Curve
  Lin Chen · Yifei Min · Mikhail Belkin · Amin Karbasi
- 2018 Poster: Overfitting or perfect fitting? Risk bounds for classification and regression rules that interpolate
  Mikhail Belkin · Daniel Hsu · Partha P Mitra
- 2017 Poster: Diving into the shallows: a computational perspective on large-scale shallow learning
  Siyuan Ma · Mikhail Belkin
- 2017 Spotlight: Diving into the shallows: a computational perspective on large-scale shallow learning
  Siyuan Ma · Mikhail Belkin
- 2016 Poster: Graphons, mergeons, and so on!
  Justin Eldridge · Mikhail Belkin · Yusu Wang
- 2016 Oral: Graphons, mergeons, and so on!
  Justin Eldridge · Mikhail Belkin · Yusu Wang
- 2016 Poster: Clustering with Bregman Divergences: an Asymptotic Analysis
  Chaoyue Liu · Mikhail Belkin
- 2015 Poster: A Pseudo-Euclidean Iteration for Optimal Recovery in Noisy ICA
  James R Voss · Mikhail Belkin · Luis Rademacher
- 2014 Poster: Learning with Fredholm Kernels
  Qichao Que · Mikhail Belkin · Yusu Wang
- 2013 Workshop: Modern Nonparametric Methods in Machine Learning
  Arthur Gretton · Mladen Kolar · Samory Kpotufe · John Lafferty · Han Liu · Bernhard Schölkopf · Alexander Smola · Rob Nowak · Mikhail Belkin · Lorenzo Rosasco · Peter Bickel · Yue Zhao
- 2013 Poster: Inverse Density as an Inverse Problem: the Fredholm Equation Approach
  Qichao Que · Mikhail Belkin
- 2013 Spotlight: Inverse Density as an Inverse Problem: the Fredholm Equation Approach
  Qichao Que · Mikhail Belkin
- 2011 Poster: Data Skeletonization via Reeb Graphs
  Xiaoyin Ge · Issam I Safa · Mikhail Belkin · Yusu Wang
- 2009 Poster: Semi-supervised Learning using Sparse Eigenfunction Bases
  Kaushik Sinha · Mikhail Belkin
- 2007 Spotlight: The Value of Labeled and Unlabeled Examples when the Model is Imperfect
  Kaushik Sinha · Mikhail Belkin
- 2007 Poster: The Value of Labeled and Unlabeled Examples when the Model is Imperfect
  Kaushik Sinha · Mikhail Belkin
- 2006 Poster: On the Relation Between Low Density Separation, Spectral Clustering and Graph Cuts
  Hariharan Narayanan · Mikhail Belkin · Partha Niyogi
- 2006 Poster: Convergence of Laplacian Eigenmaps
  Mikhail Belkin · Partha Niyogi