Poster
in
Workshop: OPT 2022: Optimization for Machine Learning

A Variable-Coefficient Nuclear Norm Penalty for Low Rank Inference

Nathan Wycoff · Ali Arab · Lisa Singh


Abstract: Low rank structure is expected in many applications, so it is often desirable to be able to specify cost functions that induce low rank. A common approach is to augment the cost with a penalty function approximating the rank function, such as the nuclear norm, which is given by the $\ell_1$ norm of the matrix's singular values. This has the advantage of being a convex function, but it biases matrix entries towards zero. On the other hand, nonconvex approximations to the rank function can make better surrogates but invariably introduce additional hyperparameters. In this article, we instead study a weighted nuclear norm approach with learnable weights, which provides the behavior of nonconvex penalties without introducing any additional hyperparameters. This approach can also benefit from the fast proximal methods which make nuclear norm approaches scalable. We demonstrate the potential of this technique by comparing it against the standard nuclear norm approach on synthetic and realistic matrix denoising and completion problems. We also outline the future work necessary to deploy this algorithm to large scale problems.
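The proximal machinery the abstract refers to can be illustrated with a short sketch. The proximal operator of the (unweighted) nuclear norm is singular value soft-thresholding; a per-singular-value weight vector turns the scalar threshold into one threshold per singular value. This is a minimal NumPy illustration of those two operators, not the paper's algorithm: the function names, the fixed example weights, and the denoising setup below are assumptions for demonstration, and the paper learns its weights rather than fixing them.

```python
import numpy as np

def prox_nuclear(X, tau):
    """Proximal operator of tau * ||X||_*: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def prox_weighted_nuclear(X, weights):
    """Weighted variant: singular value i gets its own threshold weights[i].

    Illustrative only; this closed form matches the true proximal operator
    when the weights are ordered compatibly with the singular values.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - np.asarray(weights), 0.0)) @ Vt

# Toy denoising setup (hypothetical): shrink the spectrum of a noisy
# observation of a rank-3 signal.
rng = np.random.default_rng(0)
L = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 50))  # rank-3 signal
Y = L + 0.1 * rng.normal(size=(50, 50))                  # noisy observation
X_hat = prox_nuclear(Y, tau=1.0)
```

With equal weights the weighted operator reduces to the standard one; unequal weights let large singular values be shrunk less than small ones, which is the nonconvex-like behavior the abstract describes.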
