Poster
Snap ML: A Hierarchical Framework for Machine Learning
Celestine Dünner · Thomas Parnell · Dimitrios Sarigiannis · Nikolas Ioannou · Andreea Anghel · Gummadi Ravi · Madhusudanan Kandasamy · Haris Pozidis

Tue Dec 04 02:00 PM -- 04:00 PM (PST) @ Room 517 AB #109

We describe a new software framework for fast training of generalized linear models. The framework, named Snap Machine Learning (Snap ML), combines recent advances in machine learning systems and algorithms in a nested manner to reflect the hierarchical architecture of modern computing systems. We prove theoretically that such a hierarchical system can accelerate training in distributed environments where intra-node communication is cheaper than inter-node communication. Additionally, we provide a review of the implementation of Snap ML in terms of GPU acceleration, pipelining, communication patterns and software architecture, highlighting aspects that were critical for achieving high performance. We evaluate the performance of Snap ML in both single-node and multi-node environments, quantifying the benefit of the hierarchical scheme and the data streaming functionality, and comparing with other widely used machine learning software frameworks. Finally, we present a logistic regression benchmark on the Criteo Terabyte Click Logs dataset and show that Snap ML achieves the same test loss an order of magnitude faster than any of the previously reported results, including those obtained using TensorFlow and scikit-learn.
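As a rough illustration of the benchmark workflow described above, the sketch below trains a logistic regression model with scikit-learn, one of the frameworks the abstract compares against. The synthetic data, parameter choices, and the commented Snap ML call are assumptions for illustration only, not details taken from the paper.

    # Hedged sketch of a logistic regression benchmark of the kind the abstract
    # describes. The data here is synthetic; the actual benchmark uses the
    # Criteo Terabyte Click Logs dataset.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import log_loss

    rng = np.random.default_rng(0)
    X_train = rng.standard_normal((10_000, 100))       # stand-in for Criteo-style features
    y_train = (rng.random(10_000) > 0.5).astype(int)   # binary click labels
    X_test = rng.standard_normal((2_000, 100))
    y_test = (rng.random(2_000) > 0.5).astype(int)

    # scikit-learn baseline referenced in the abstract
    clf = LogisticRegression(solver="saga", max_iter=100)
    clf.fit(X_train, y_train)
    print("test log loss:", log_loss(y_test, clf.predict_proba(X_test)))

    # A Snap ML run would follow the same fit/predict pattern; the import and
    # parameter names below are hypothetical, shown only to indicate the shape
    # of such a comparison:
    #   from snapml import LogisticRegression as SnapLogisticRegression
    #   snap_clf = SnapLogisticRegression(use_gpu=True)
    #   snap_clf.fit(X_train, y_train)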

Author Information

Celestine Dünner (IBM Research)
Thomas Parnell (IBM Research)
Dimitrios Sarigiannis (IBM Research)
Nikolas Ioannou (IBM Research)
Andreea Anghel (IBM Research)
Gummadi Ravi (IBM Systems)
Madhusudanan Kandasamy (IBM Systems)
Haris Pozidis (IBM Research)

More from the Same Authors

  • 2020 Poster: SnapBoost: A Heterogeneous Boosting Machine »
    Thomas Parnell · Andreea Anghel · Małgorzata Łazuka · Nikolas Ioannou · Sebastian Kurella · Peshal Agarwal · Nikolaos Papandreou · Haris Pozidis
  • 2019 : Posters and Coffee »
    Sameer Kumar · Tomasz Kornuta · Oleg Bakhteev · Hui Guan · Xiaomeng Dong · Minsik Cho · Soeren Laue · Theodore Vasiloudis · Andreea Anghel · Erik Wijmans · Zeyuan Shang · Oleksii Kuchaiev · Ji Lin · Susan Zhang · Ligeng Zhu · Beidi Chen · VINU Joseph · Jialin Ding · Jonathan Raiman · Ahnjae Shin · Vithu Thangarasa · Anush Sankaran · Akhil Mathur · Martino Dazzi · Markus Löning · Darryl Ho · Emanuel Zgraggen · Supun Nakandala · Tomasz Kornuta · Rita Kuznetsova
  • 2019 Poster: SySCD: A System-Aware Parallel Coordinate Descent Algorithm »
    Nikolas Ioannou · Celestine Mendler-Dünner · Thomas Parnell
  • 2019 Spotlight: SySCD: A System-Aware Parallel Coordinate Descent Algorithm »
    Nikolas Ioannou · Celestine Mendler-Dünner · Thomas Parnell
  • 2018 : Posters (all accepted papers) + Break »
    Jianyu Wang · Denis Gudovskiy · Ziheng Jiang · Michael Kaufmann · Andreea Anghel · James Bradbury · Nikolas Ioannou · Nitin Agrawal · Emma Tosch · Gyeong-In Yu · Keno Fischer · Jarrett Revels · Giuseppe Siracusano · Yaoqing Yang · Jeff Johnson · Yang You · Hector Yuen · Chris Ying · Honglei Liu · Nikoli Dryden · Xiangxi Mo · YZH Wang · Amit Juneja · Micah Smith · Qian Yu · pramod gupta · Deepak Narayanan · Keshav Santhanam · Tim Capes · Abdul Dakkak · Norman Mu · Ke Deng · Liam Li · Joao Carreira · Luis Remis · Deepti Raghavan · Una-May O'Reilly · Amanpreet Singh · Mido Assran · Eugene Wu · Eytan Bakshy · Jinliang Wei · Mike Innes · Viral Shah · Haibin Lin · Conrad Sanderson · Ryan Curtin · Marcus Edel
  • 2017 Poster: Efficient Use of Limited-Memory Accelerators for Linear Learning on Heterogeneous Systems »
    Celestine Dünner · Thomas Parnell · Martin Jaggi