Poster
Feature Clustering for Accelerating Parallel Coordinate Descent
Chad Scherrer · Ambuj Tewari · Mahantesh Halappanavar · David Haglin

Mon Dec 03 07:00 PM -- 12:00 AM (PST) @ Harrah’s Special Events Center 2nd Floor
Large-scale $\ell_1$-regularized loss minimization problems arise in numerous applications such as compressed sensing and high-dimensional supervised learning, including classification and regression problems. High-performance algorithms and implementations are critical to efficiently solving these problems. Building upon previous work on coordinate descent algorithms for $\ell_1$-regularized problems, we introduce a novel family of algorithms called block-greedy coordinate descent that includes, as special cases, several existing algorithms such as SCD, Greedy CD, Shotgun, and Thread-greedy. We give a unified convergence analysis for the family of block-greedy algorithms. The analysis suggests that block-greedy coordinate descent can better exploit parallelism if features are clustered so that the maximum inner product between features in different blocks is small. Our theoretical convergence analysis is supported by experimental results using data from diverse real-world applications. We hope that the algorithmic approaches and convergence analysis we provide will not only advance the field, but will also encourage researchers to systematically explore the design space of algorithms for solving large-scale $\ell_1$-regularization problems.
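To make the block-greedy scheme concrete, the sketch below illustrates the idea for the special case of $\ell_1$-regularized least squares. This is not the authors' implementation: the least-squares loss, the function names (`block_greedy_cd`, `soft_threshold`), and the sequential loop over blocks are all illustrative assumptions; in the parallel setting each block would run on its own thread, and the clustering of features into blocks (so that inter-block inner products are small) is taken as given.

```python
# A minimal sketch (illustrative, not the paper's implementation) of
# block-greedy coordinate descent for l1-regularized least squares:
#     min_w  (1/2n) ||X w - y||^2 + lam ||w||_1
# Features are partitioned into blocks; at each step every block proposes
# the coordinate with the largest-magnitude partial gradient (greedy
# within the block) and applies a soft-thresholded coordinate update.
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the prox of t * |.|."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def block_greedy_cd(X, y, blocks, lam=0.1, n_iters=100):
    """blocks: list of integer index arrays partitioning the columns of X."""
    n, p = X.shape
    w = np.zeros(p)
    lipschitz = (X ** 2).sum(axis=0) / n       # per-coordinate Lipschitz constants
    resid = X @ w - y                          # current residual X w - y
    for _ in range(n_iters):
        # Sequential loop for clarity; a parallel implementation would
        # update one greedily chosen coordinate per block concurrently,
        # which is where small inter-block inner products matter.
        for idx in blocks:
            g = X[:, idx].T @ resid / n        # partial gradients in this block
            k = np.argmax(np.abs(g))           # greedy choice within the block
            j = idx[k]
            w_new = soft_threshold(w[j] - g[k] / lipschitz[j],
                                   lam / lipschitz[j])
            resid += (w_new - w[j]) * X[:, j]  # incremental residual update
            w[j] = w_new
    return w
```

In practice the blocks would be formed by clustering the columns of $X$ so that the maximum inner product between features in different blocks is small, e.g. via graph clustering on the feature correlation graph, which is the clustering step the abstract refers to.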

Author Information

Chad Scherrer (Galois, Inc)
Ambuj Tewari (University of Michigan)
Mahantesh Halappanavar (Pacific Northwest National Laboratory)
David Haglin (Pacific Northwest National Laboratory)
