
Feature-distributed sparse regression: a screen-and-clean approach

Jiyan Yang · Michael Mahoney · Michael Saunders · Yuekai Sun

Area 5+6+7+8 #118

Keywords: [ Sparsity and Feature Selection ] [ Large Scale Learning and Big Data ]


Most existing approaches to distributed sparse regression assume the data is partitioned by samples. However, for high-dimensional data (D >> N), it is more natural to partition the data by features. We propose an algorithm for distributed sparse regression when the data is partitioned by features rather than samples. Our approach allows the user to tailor our general method to various distributed computing platforms by trading off the total amount of data (in bits) sent over the communication network against the number of rounds of communication. We show that an implementation of our approach is capable of solving L1-regularized L2 regression problems with millions of features in minutes.
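The "screen-and-clean" idea in the title can be illustrated with a minimal single-machine sketch. This is not the paper's algorithm: it assumes each worker screens its own feature block by marginal correlation with the response (so only surviving columns would be communicated), and the "clean" step fits the L1-regularized L2 regression on the pooled survivors via proximal gradient (ISTA). All function names and parameters here are illustrative choices, not taken from the paper.

```python
import numpy as np

def screen_and_clean(feature_blocks, y, keep_per_block, lam, iters=500):
    """Illustrative sketch of a screen-and-clean scheme for feature-partitioned data.

    feature_blocks: list of (N, p_k) arrays, one block of columns per "worker".
    Screen: each block keeps its columns most correlated with y.
    Clean:  lasso (L1-regularized L2 regression) on the pooled survivors.
    """
    survivors = []
    for X_k in feature_blocks:
        scores = np.abs(X_k.T @ y)                 # marginal correlation scores
        top = np.argsort(scores)[-keep_per_block:] # indices of strongest columns
        survivors.append(X_k[:, top])              # only these would be communicated
    X_s = np.hstack(survivors)                     # pooled surviving features

    # "Clean" step: solve min_w ||X_s w - y||^2 / (2N) + lam * ||w||_1 by ISTA.
    n, d = X_s.shape
    w = np.zeros(d)
    step = n / np.linalg.norm(X_s, 2) ** 2         # 1 / Lipschitz constant of the gradient
    for _ in range(iters):
        grad = X_s.T @ (X_s @ w - y) / n
        w = w - step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft-threshold
    return w
```

In a real feature-distributed setting, the screening loop runs in parallel on the workers and only the surviving columns cross the network, which is where the bits-versus-rounds trade-off mentioned in the abstract arises.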
