Parallelizing Support Vector Machines on Distributed Computers
Edward Y. Chang · Kaihua Zhu · Hao Wang · Hongjie Bai · Jian Li · Zhihuan Qiu · Hang Cui
2007 Poster
Abstract:
Support Vector Machines (SVMs) suffer from a widely recognized scalability problem in both memory use and computational time. To improve scalability, we have developed a parallel SVM algorithm (PSVM), which reduces memory use by performing a row-based, approximate matrix factorization, and which loads only essential data onto each machine to perform parallel computation. Let n denote the number of training instances, p the reduced matrix dimension after factorization (p is significantly smaller than n), and m the number of machines. PSVM reduces the memory requirement from O(n^2) to O(np/m), and improves computation time to O(np^2/m). Empirical studies on up to 500 computers show PSVM to be effective.
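To illustrate the row-based approximate factorization at the heart of PSVM, here is a minimal single-machine sketch of pivoted incomplete Cholesky factorization, which approximates an n-by-n kernel matrix K by H H^T with H of shape n-by-p. The function name `icf` and the stopping tolerance are illustrative choices, not the authors' code; the paper's distributed variant additionally partitions the rows of H across m machines, which is what yields the O(np/m) per-machine memory footprint.

```python
import numpy as np

def icf(K, p, tol=1e-12):
    """Pivoted incomplete Cholesky: return H (n x p) with K ~ H @ H.T.

    Only one column of K is touched per iteration, so the full n x n
    matrix never needs to be materialized if K is computed on demand.
    """
    n = K.shape[0]
    H = np.zeros((n, p))
    d = np.diag(K).astype(float).copy()  # residual diagonal of K - H H^T
    for j in range(p):
        i = int(np.argmax(d))            # greedy pivot: largest residual
        if d[i] <= tol:                  # residual exhausted; rank < p
            return H[:, :j]
        # New column of H from the pivot column of K, orthogonalized
        # against the columns already computed.
        H[:, j] = (K[:, i] - H[:, :j] @ H[i, :j]) / np.sqrt(d[i])
        d -= H[:, j] ** 2                # update residual diagonal
        d[i] = 0.0                       # pivot is fully explained
    return H

# Example: a rank-5 PSD "kernel" matrix is recovered exactly with p = 5,
# while storage drops from n*n entries to n*p.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 5))
K = A @ A.T
H = icf(K, 5)
rel_err = np.linalg.norm(K - H @ H.T) / np.linalg.norm(K)
```

For a genuinely low-rank (or rapidly decaying spectrum) kernel matrix, a small p suffices, which is the regime in which the O(n^2) to O(np/m) memory reduction quoted above is meaningful.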