Poster
Maximal Sparsity with Deep Networks?
Bo Xin · Yizhou Wang · Wen Gao · David Wipf · Baoyuan Wang

Mon Dec 05 09:00 AM -- 12:30 PM (PST) @ Area 5+6+7+8 #126
The iterations of many sparse estimation algorithms are composed of a fixed linear filter cascaded with a thresholding nonlinearity, which collectively resemble a typical neural network layer. Consequently, a lengthy sequence of algorithm iterations can be viewed as a deep network with shared, hand-crafted layer weights. It is therefore quite natural to examine the degree to which a learned network model might act as a viable surrogate for traditional sparse estimation in domains where ample training data is available. While the possibility of a reduced computational budget is readily apparent when a ceiling is imposed on the number of layers, our work primarily focuses on estimation accuracy. In particular, it is well-known that when a signal dictionary has coherent columns, as quantified by a large RIP constant, then most tractable iterative algorithms are unable to find maximally sparse representations. In contrast, we demonstrate both theoretically and empirically the potential for a trained deep network to recover minimal $\ell_0$-norm representations in regimes where existing methods fail. The resulting system, which can effectively learn novel iterative sparse estimation algorithms, is deployed on a practical photometric stereo estimation problem, where the goal is to remove sparse outliers that can disrupt the estimation of surface normals from a 3D scene.
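To make the layer analogy concrete, the sketch below unrolls ISTA, taken here as one representative of the "many sparse estimation algorithms" the abstract refers to (the paper does not single out this particular method). Each iteration applies a fixed linear filter followed by an elementwise soft-thresholding nonlinearity, so running n iterations is equivalent to an n-layer network with shared, hand-crafted weights. The names and parameters (unrolled_ista, soft_threshold, lam, n_layers) are illustrative, not taken from the paper.

```python
import numpy as np

def soft_threshold(z, tau):
    """Elementwise soft-thresholding: the nonlinearity applied in each 'layer'."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def unrolled_ista(A, y, lam, n_layers=20):
    """Run a fixed number of ISTA iterations for min ||y - Ax||^2 + lam*||x||_1.

    Each iteration is a fixed linear filter plus a threshold, i.e. one layer
    of a deep network whose weights are shared and derived by hand from A.
    """
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    W = np.eye(A.shape[1]) - (A.T @ A) / L   # shared linear filter (layer weight)
    b = (A.T @ y) / L                        # shared bias derived from the input y
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):                # network depth = iteration count
        x = soft_threshold(W @ x + b, lam / L)
    return x
```

A learned surrogate of the kind studied in the paper would instead treat the filter, bias, and threshold as trainable per-layer parameters fit to data, rather than deriving them from the dictionary A.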

Author Information

Bo Xin (Peking University)
Yizhou Wang (Peking University)
Wen Gao (Peking University)
David Wipf (Microsoft Research)
Baoyuan Wang (Microsoft Research)
