Poster
A novel variational form of the Schatten-$p$ quasi-norm
Paris Giampouras · Rene Vidal · Athanasios Rontogiannis · Benjamin Haeffele
The Schatten-$p$ quasi-norm with $p\in(0,1)$ has recently gained considerable attention in various low-rank matrix estimation problems, offering significant benefits over convex heuristics such as the nuclear norm. However, due to the nonconvexity of the Schatten-$p$ quasi-norm, its minimization suffers from two major drawbacks: 1) the lack of theoretical guarantees and 2) the high computational cost of the minimization task, even for basic operations such as finding stationary points. To reduce the high computational cost induced by Schatten-$p$ quasi-norm minimization, variational forms defined over smaller matrix factors, whose product equals the original matrix, have been proposed. Here, we propose and analyze a novel {\it variational form of the Schatten-$p$ quasi-norm} which, for the first time in the literature, is defined for any continuous value of $p\in(0,1]$ and decouples along the columns of the factorized matrices. The proposed form can be viewed as the natural generalization of the well-known variational form of the nuclear norm to the nonconvex case, i.e., for $p\in(0,1)$. Notably, low-rankness is now imposed via a group-sparsity-promoting regularizer. The resulting formulation lends itself to SVD-free algorithms, thus offering lower computational complexity than that induced by the original definition of the Schatten-$p$ quasi-norm. A local optimality analysis is provided, showing that we can arrive at a local minimum of the original Schatten-$p$ quasi-norm minimization problem by reaching a local minimum of the matrix-factorization-based surrogate problem. In addition, for the case of the squared Frobenius loss with linear operators obeying the restricted isometry property (RIP), a rank-one update scheme is proposed, which offers a way to escape poor local minima. Finally, the efficiency of our approach is empirically demonstrated on a matrix completion problem.
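For reference, the well-known variational (factorized) form of the nuclear norm that the proposed form generalizes reads
\[
\|X\|_* \;=\; \min_{U,V:\,UV^\top = X} \tfrac{1}{2}\big(\|U\|_F^2 + \|V\|_F^2\big) \;=\; \min_{U,V:\,UV^\top = X} \sum_{i=1}^{d} \|u_i\|_2\,\|v_i\|_2,
\]
where $u_i$ and $v_i$ denote the $i$-th columns of the factors $U\in\mathbb{R}^{m\times d}$ and $V\in\mathbb{R}^{n\times d}$. As an illustration only (the precise definition and normalization are given in the paper), a column-decoupled surrogate of the kind described in the abstract would penalize
\[
\sum_{i=1}^{d} \big(\|u_i\|_2\,\|v_i\|_2\big)^{p}, \qquad p\in(0,1],
\]
which coincides with the nuclear-norm form at $p=1$ and acts as a group-sparsity penalty that drives entire factor columns to zero, hence promoting low rank.

The sketch below illustrates the kind of SVD-free computation such a factorized formulation enables on matrix completion: plain gradient descent on a masked least-squares term plus a smoothed column-wise penalty. The objective, the smoothing constant eps, and all hyperparameters are illustrative assumptions; this is not the algorithm (nor the rank-one update scheme) proposed in the paper.

import numpy as np

def factorized_completion(Y, M, d=10, p=0.5, lam=1e-2, eps=1e-8,
                          lr=1e-3, n_iters=2000, seed=0):
    # Sketch: minimize 0.5*||M*(U V^T - Y)||_F^2
    #         + lam * sum_i ((||u_i||^2 + eps)*(||v_i||^2 + eps))^(p/2)
    # by gradient descent; Y holds the observed entries and M is a binary mask.
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    U = 0.1 * rng.standard_normal((m, d))
    V = 0.1 * rng.standard_normal((n, d))
    for _ in range(n_iters):
        R = M * (U @ V.T - Y)               # masked residual, shape (m, n)
        su = np.sum(U * U, axis=0) + eps    # per-column squared norms of U
        sv = np.sum(V * V, axis=0) + eps    # per-column squared norms of V
        # gradient of the smoothed column-wise penalty (decouples over columns)
        gU = U * (p * su ** (p / 2 - 1) * sv ** (p / 2))
        gV = V * (p * sv ** (p / 2 - 1) * su ** (p / 2))
        gradU = R @ V + lam * gU            # gradient of the data-fit term w.r.t. U
        gradV = R.T @ U + lam * gV          # gradient of the data-fit term w.r.t. V
        U -= lr * gradU
        V -= lr * gradV
    return U, V

No SVD is ever computed here, which is the computational benefit of the factorized formulation; the column-wise group penalty is what encourages columns of $U$ and $V$, and hence the rank of $UV^\top$, to shrink.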
Author Information
Paris Giampouras (The Johns Hopkins University)
Rene Vidal (Mathematical Institute for Data Science, Johns Hopkins University, USA)
Athanasios Rontogiannis (National Observatory of Athens)
Benjamin Haeffele (Johns Hopkins University)
More from the Same Authors
- 2022 Poster: Global Linear and Local Superlinear Convergence of IRLS for Non-Smooth Robust Regression
  Liangzu Peng · Christian Kümmerle · Rene Vidal
- 2020 Poster: Conformal Symplectic and Relativistic Optimization
  Guilherme Franca · Jeremias Sulam · Daniel Robinson · Rene Vidal
- 2020 Spotlight: Conformal Symplectic and Relativistic Optimization
  Guilherme Franca · Jeremias Sulam · Daniel Robinson · Rene Vidal
- 2020 Poster: A Game Theoretic Analysis of Additive Adversarial Attacks and Defenses
  Ambar Pal · Rene Vidal
- 2019: Keynote I – Rene Vidal (Johns Hopkins University)
  René Vidal
- 2019 Poster: A Linearly Convergent Method for Non-Smooth Non-Convex Optimization on the Grassmannian with Applications to Robust Subspace and Dictionary Learning
  Zhihui Zhu · Tianyu Ding · Daniel Robinson · Manolis Tsakiris · René Vidal
- 2018 Poster: Dual Principal Component Pursuit: Improved Analysis and Efficient Algorithms
  Zhihui Zhu · Yifan Wang · Daniel Robinson · Daniel Naiman · René Vidal · Manolis Tsakiris
- 2012 Poster: Finding Exemplars from Pairwise Dissimilarities via Simultaneous Sparse Recovery
  Ehsan Elhamifar · Guillermo Sapiro · René Vidal
- 2011 Poster: Sparse Manifold Clustering and Embedding
  Ehsan Elhamifar · René Vidal
- 2006 Poster: Online Clustering of Moving Subspaces
  René Vidal