Poster
Hyperparameter Tuning is All You Need for LISTA
Xiaohan Chen · Jialin Liu · Zhangyang Wang · Wotao Yin

Wed Dec 08 04:30 PM -- 06:00 PM (PST)

Learned Iterative Shrinkage-Thresholding Algorithm (LISTA) introduces the concept of unrolling an iterative algorithm and training it like a neural network. It has had great success on sparse recovery. In this paper, we show that adding momentum to intermediate variables in the LISTA network achieves a better convergence rate and, in particular, the network with instance-optimal parameters is superlinearly convergent. Moreover, our new theoretical results lead to a practical approach of automatically and adaptively calculating the parameters of a LISTA network layer based on its previous layers. Perhaps most surprisingly, such an adaptive-parameter procedure reduces the training of LISTA to tuning only three hyperparameters from data: a new record set in the context of recent advances in trimming down LISTA complexity. We call this new ultra-lightweight network HyperLISTA. Compared to state-of-the-art LISTA models, HyperLISTA achieves almost the same performance on seen data distributions and performs better when tested on unseen distributions (specifically, those with different sparsity levels and nonzero magnitudes). Code is available at https://github.com/VITA-Group/HyperLISTA.
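To make the underlying idea concrete, the sketch below shows a classical ISTA loop for sparse recovery with a momentum-style extrapolation added to the intermediate iterates. This is only an illustration of the general momentum idea the abstract refers to, not the paper's actual HyperLISTA update rule or its adaptive-parameter schedule; the fixed `beta`, `lam`, and step size here are hypothetical choices.

```python
import numpy as np

def soft_threshold(v, theta):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def ista_momentum(A, b, lam=0.01, beta=0.5, n_iters=1000):
    """Proximal-gradient (ISTA) iterations with a fixed momentum term.

    Illustrative sketch only: HyperLISTA computes its layer parameters
    adaptively from previous layers, whereas here `lam` (l1 weight) and
    `beta` (momentum coefficient) are fixed hand-picked constants.
    """
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    x_prev = x.copy()
    for _ in range(n_iters):
        # Extrapolate with momentum, then take a proximal gradient step.
        y = x + beta * (x - x_prev)
        grad = A.T @ (A @ y - b)
        x_prev = x
        x = soft_threshold(y - step * grad, step * lam)
    return x
```

In an unrolled (LISTA-style) network, each loop iteration becomes one layer, and quantities such as the threshold and momentum coefficient become per-layer parameters learned or, as in the paper, computed from data-driven hyperparameters.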

Author Information

Xiaohan Chen (The University of Texas at Austin)
Jialin Liu (Alibaba DAMO Academy)
Zhangyang Wang (UT Austin)
Wotao Yin (Alibaba US, DAMO Academy)
