On the Linear Convergence of the Proximal Gradient Method for Trace Norm Regularization
Ke Hou · Zirui Zhou · Anthony Man-Cho So · Zhi-Quan Luo

Thu Dec 05 07:00 PM -- 11:59 PM (PST) @ Harrah's Special Events Center, 2nd Floor

Motivated by various applications in machine learning, the problem of minimizing a convex smooth loss function with trace norm regularization has received much attention lately. Currently, a popular method for solving such a problem is the proximal gradient method (PGM), which is known to have a sublinear rate of convergence. In this paper, we show that for a large class of loss functions, the convergence rate of the PGM is in fact linear. Our result is established without any strong convexity assumption on the loss function. A key ingredient in our proof is a new Lipschitzian error bound for the aforementioned trace norm-regularized problem, which may be of independent interest.
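For concreteness, the PGM iteration the abstract refers to alternates a gradient step on the smooth loss with the proximal operator of the trace norm, which is singular value soft-thresholding. The sketch below is illustrative, not the authors' implementation: the toy loss f(X) = ½‖X − M‖²_F, the regularization weight `lam`, and the iteration count are all assumptions chosen to make the example self-contained.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of tau * trace norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def proximal_gradient(grad_f, L, lam, X0, iters=500):
    """PGM: X <- prox_{(lam/L)||.||_*}(X - grad_f(X)/L), with L the Lipschitz
    constant of grad_f."""
    X = X0
    for _ in range(iters):
        X = svt(X - grad_f(X) / L, lam / L)
    return X

# Toy smooth loss f(X) = 0.5 * ||X - M||_F^2 for a rank-5 target M,
# whose gradient X - M is 1-Lipschitz, so we may take L = 1.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 5)) @ rng.standard_normal((5, 20))
grad_f = lambda X: X - M
X_hat = proximal_gradient(grad_f, L=1.0, lam=5.0, X0=np.zeros_like(M))
# Thresholding the singular values promotes a low-rank solution.
print(np.linalg.matrix_rank(X_hat, tol=1e-6))
```

For this particular loss the iteration reaches its fixed point in one step (X_hat = svt(M, 5)); for general smooth losses the PGM converges sublinearly in the worst case, and the paper's contribution is showing the rate is in fact linear for a large class of losses.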

Author Information

Ke Hou (CUHK)
Zirui Zhou (CUHK)
Anthony Man-Cho So (CUHK)
Zhi-Quan Luo (University of Minnesota, Twin Cities)