

Poster

Iterative Thresholding Algorithm for Sparse Inverse Covariance Estimation

Benjamin Rolfs · Bala Rajaratnam · Dominique Guillot · Arian Maleki · Ian Wong

Harrah’s Special Events Center 2nd Floor

Abstract: Sparse graphical modelling/inverse covariance selection is an important problem in machine learning and has seen significant advances in recent years. A major focus has been on methods which perform model selection in high dimensions. To this end, numerous convex $\ell_1$ regularization approaches have been proposed in the literature. It is, however, not clear which of these methods is optimal in any well-defined sense. A major gap in this regard pertains to the rate of convergence of the proposed optimization methods. To address this, an iterative thresholding algorithm for numerically solving the $\ell_1$-penalized maximum likelihood problem for sparse inverse covariance estimation is presented. The proximal gradient method considered in this paper is shown to converge at a linear rate, a result which is the first of its kind for numerically solving the sparse inverse covariance estimation problem. The convergence rate is provided in closed form and is related to the condition number of the optimal point. Numerical results demonstrating the proven rate of convergence are presented.
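The abstract describes a proximal gradient (iterative soft-thresholding) scheme applied to the $\ell_1$-penalized log-determinant objective $\min_{X \succ 0} -\log\det X + \mathrm{tr}(SX) + \rho\|X\|_1$. A minimal NumPy sketch of this general approach follows; it is not the paper's exact algorithm. The initialization, the fixed base step size `t`, and the step-halving safeguard that keeps the iterate positive definite are all assumptions made here for illustration.

```python
import numpy as np

def soft_threshold(A, tau):
    # Entrywise soft-thresholding: the proximal operator of tau * ||.||_1.
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def prox_grad_sparse_inv_cov(S, rho, t=1.0, max_iter=500, tol=1e-6):
    """Proximal gradient sketch for
        min_{X > 0}  -logdet(X) + tr(S X) + rho * ||X||_1,
    where S is a sample covariance matrix.
    (Hypothetical illustration, not the paper's exact method.)"""
    p = S.shape[0]
    # Diagonal initialization (an assumption; any positive definite start works).
    X = np.diag(1.0 / (np.diag(S) + rho))
    for _ in range(max_iter):
        grad = S - np.linalg.inv(X)  # gradient of the smooth part
        step = t
        while True:
            # Gradient step followed by soft-thresholding.
            X_next = soft_threshold(X - step * grad, step * rho)
            # Halve the step until the new iterate is positive definite;
            # this terminates because X itself is positive definite.
            if np.all(np.linalg.eigvalsh(X_next) > 0):
                break
            step *= 0.5
        if np.max(np.abs(X_next - X)) < tol:
            X = X_next
            break
        X = X_next
    return X
```

The soft-thresholding step is what makes the iterates sparse: any entry of the gradient-updated matrix with magnitude below `step * rho` is set exactly to zero, so the support of the estimate shrinks as `rho` grows.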
