Robust PCA, commonly used across many applications, is an algorithmic attempt to reduce the sensitivity of classical PCA to outliers. The basic idea is to learn a decomposition of a data matrix of interest into low-rank and sparse components, the latter representing unwanted outliers. Although the resulting problem is typically NP-hard, convex relaxations provide a computationally expedient alternative with theoretical support. In practical regimes, however, these performance guarantees break down, and a variety of non-convex alternatives, including Bayesian-inspired models, have been proposed to boost estimation quality. Unfortunately, without additional a priori knowledge, none of these methods can significantly expand the critical operational range in which exact principal subspace recovery is possible. Into this mix we propose a novel pseudo-Bayesian algorithm that explicitly compensates for design weaknesses in many existing non-convex approaches, leading to state-of-the-art performance with a sound analytical foundation.
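As context for the convex relaxation the abstract refers to, the following is a minimal sketch of Principal Component Pursuit, the standard convex baseline that splits a matrix M into low-rank L plus sparse S by minimizing ||L||_* + λ||S||_1 subject to L + S = M, solved here with a basic augmented-Lagrangian (ADMM-style) loop. This illustrates the baseline decomposition only, not the paper's pseudo-Bayesian algorithm; the function name robust_pca_pcp, the default λ = 1/√max(m, n), and the μ step-size heuristic are illustrative assumptions, not anything specified in the paper.

```python
import numpy as np

def shrink(X, tau):
    """Elementwise soft-thresholding (the proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Singular value thresholding (the proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def robust_pca_pcp(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Principal Component Pursuit:
    minimize ||L||_* + lam * ||S||_1  subject to  L + S = M.
    lam and mu defaults are common heuristics, not tuned values.
    """
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))                 # standard PCP weight
    if mu is None:
        mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)  # heuristic penalty parameter
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                               # dual variable for L + S = M
    norm_M = np.linalg.norm(M, 'fro')
    for _ in range(max_iter):
        L = svd_threshold(M - S + Y / mu, 1.0 / mu)    # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)           # sparse-outlier update
        residual = M - L - S
        Y = Y + mu * residual                          # dual ascent step
        if np.linalg.norm(residual, 'fro') <= tol * norm_M:
            break
    return L, S

# Usage: recover a rank-2 matrix corrupted by 5% gross sparse outliers.
rng = np.random.default_rng(0)
L_true = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 100))
S_true = np.zeros((100, 100))
mask = rng.random((100, 100)) < 0.05
S_true[mask] = 10 * rng.standard_normal(mask.sum())
L_hat, S_hat = robust_pca_pcp(L_true + S_true)
print(np.linalg.norm(L_hat - L_true, 'fro') / np.linalg.norm(L_true, 'fro'))
```

In the regime the abstract describes as practical (higher rank or denser outliers), this convex estimator's recovery guarantees degrade, which is the gap the proposed pseudo-Bayesian method targets.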
Author Information
Tae-Hyun Oh (KAIST)
Tae-Hyun Oh has been a postdoctoral associate at MIT CSAIL since August 2017. He received the B.E. degree (highest ranking) in Computer Engineering from Kwangwoon University, South Korea, in 2010, and the M.S. and Ph.D. degrees in Electrical Engineering from KAIST, South Korea, in 2012 and 2017, respectively. He was a research intern in the Visual Computing group at Microsoft Research, Beijing, in 2014 and in the Cognitive group at Microsoft Research, Redmond, in 2016. He was a recipient of a Microsoft Research Asia fellowship, a gold prize in the Samsung HumanTech thesis award, two Qualcomm Innovation awards, and top research achievement awards from KAIST. His research interests include robust computer vision and machine learning.
Yasuyuki Matsushita (Osaka University)
In So Kweon (KAIST)
David Wipf (Microsoft Research)
More from the Same Authors
- 2021 Spotlight: On the Value of Infinite Gradients in Variational Autoencoder Models
  Bin Dai · Li Wenliang · David Wipf
- 2021 Poster: A Biased Graph Neural Network Sampler with Near-Optimal Regret
  Qingru Zhang · David Wipf · Quan Gan · Le Song
- 2021 Poster: GRIN: Generative Relation and Intention Network for Multi-agent Trajectory Prediction
  Longyuan Li · Jian Yao · Li Wenliang · Tong He · Tianjun Xiao · Junchi Yan · David Wipf · Zheng Zhang
- 2021 Poster: From Canonical Correlation Analysis to Self-supervised Graph Neural Networks
  Hengrui Zhang · Qitian Wu · Junchi Yan · David Wipf · Philip S Yu
- 2021 Poster: On the Value of Infinite Gradients in Variational Autoencoder Models
  Bin Dai · Li Wenliang · David Wipf
- 2017 Oral: From Bayesian Sparsity to Gated Recurrent Nets
  Hao He · Bo Xin · Satoshi Ikehata · David Wipf
- 2017 Poster: From Bayesian Sparsity to Gated Recurrent Nets
  Hao He · Bo Xin · Satoshi Ikehata · David Wipf
- 2016 Poster: Maximal Sparsity with Deep Networks?
  Bo Xin · Yizhou Wang · Wen Gao · David Wipf · Baoyuan Wang
- 2013 Poster: Non-Uniform Camera Shake Removal Using a Spatially-Adaptive Sparse Penalty
  Haichao Zhang · David Wipf
- 2013 Oral: Non-Uniform Camera Shake Removal Using a Spatially-Adaptive Sparse Penalty
  Haichao Zhang · David Wipf