NIPS 2018 Expo Talk

Dec. 2, 2018

Intel nGraph: Unlocking Next-Generation Performance with Deep Learning Compilers

Sponsor: Intel AI

Organizers:
Adam Procter (Intel AI)

Presenters:
Adam Procter (Intel AI), Adam Straw (Intel AI), Robert Earhart (Intel AI)

Abstract:

The rapid adoption of deep learning in demanding, large-scale, real-world applications has driven a corresponding surge in demand for high-performance training and inference solutions. This demand is reflected in growing investment in deep learning performance by major hardware manufacturers, including a proliferation of new application-specific accelerators. But performance is not driven by hardware alone. In the software realm, a new class of deep learning compilers has emerged, bringing both classic and novel compiler techniques to bear on maximizing the performance of deep learning systems. Recently developed deep learning compilers include NNVM/TVM from the University of Washington and Amazon, Glow from Facebook, XLA from Google, and nGraph from Intel. These compilers unlock a wealth of optimizations that operate over the whole data-flow graph. This approach achieves substantial speedups over the execution model favored by existing frameworks, in which an interpreter orchestrates calls to per-op compute kernels that must be optimized specifically for each framework and hardware target.
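To illustrate the distinction, the sketch below contrasts framework-style per-op interpretation with a whole-graph, fused evaluation of the same small data-flow graph. This is a generic NumPy illustration, not nGraph code; the graph encoding, kernel table, and function names are invented for this example, and a real compiler would lower the fused form to a single kernel rather than relying on NumPy.

    # Generic NumPy illustration (not nGraph code) of per-op interpretation
    # versus whole-graph fused execution for y = relu(a * b + c).
    import numpy as np

    # Data-flow graph as a list of (op, input names, output name) triples.
    GRAPH = [
        ("mul",  ("a", "b"),  "t0"),
        ("add",  ("t0", "c"), "t1"),
        ("relu", ("t1",),     "y"),
    ]

    KERNELS = {
        "mul":  lambda x, y: x * y,
        "add":  lambda x, y: x + y,
        "relu": lambda x: np.maximum(x, 0.0),
    }

    def run_interpreted(inputs):
        # Framework-style execution: one kernel call and one materialized
        # temporary tensor per node in the graph.
        env = dict(inputs)
        for op, args, out in GRAPH:
            env[out] = KERNELS[op](*(env[name] for name in args))
        return env["y"]

    def run_fused(inputs):
        # Compiler-style execution: the whole graph expressed as one fused
        # computation, the form a compiler would lower to a single kernel.
        a, b, c = inputs["a"], inputs["b"], inputs["c"]
        return np.maximum(a * b + c, 0.0)

    rng = np.random.default_rng(0)
    inputs = {name: rng.standard_normal((512, 512)) for name in "abc"}
    assert np.allclose(run_interpreted(inputs), run_fused(inputs))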

In this talk, we provide a comprehensive overview of the nGraph deep learning compiler from Intel. The talk will include:

(1) an overview of the motivation for deep learning compilers and the design challenges they face; (2) a deep dive into the design of nGraph, including its intermediate representation, optimization pipelines, runtime interface, and framework integration; and (3) a brief look at related efforts and future directions in deep learning compiler research.
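To give a flavor of what item (2) covers, the sketch below shows, in generic Python rather than nGraph's actual C++ classes, the kind of structure a compiler's intermediate representation takes: nodes recording an operation and its inputs, plus a whole-graph rewrite pass that fuses an Add followed by a Relu into a single node. The Node class and fuse_add_relu pass are invented for this illustration; nGraph's real IR and pass framework are richer and differ in detail.

    # Illustrative only -- the Node class and fuse_add_relu pass are hypothetical,
    # not part of nGraph's API.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        op: str                                      # e.g. "Parameter", "Add", "Relu", "AddRelu"
        inputs: List["Node"] = field(default_factory=list)

    def fuse_add_relu(node: Node) -> Node:
        # Rewrite the node's producers first (post-order traversal of the graph),
        # then replace Relu(Add(x, y)) with a single fused AddRelu(x, y) node.
        node.inputs = [fuse_add_relu(i) for i in node.inputs]
        if node.op == "Relu" and node.inputs[0].op == "Add":
            return Node("AddRelu", node.inputs[0].inputs)
        return node

    # Build Relu(Add(a, b)) and run the fusion pass over the whole graph.
    a, b = Node("Parameter"), Node("Parameter")
    graph = fuse_add_relu(Node("Relu", [Node("Add", [a, b])]))
    assert graph.op == "AddRelu" and len(graph.inputs) == 2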