NeurIPS 2019 Expo Demo
Efficient Deep Learning computing with Intel® Nervana™ Neural Network Processor for Training
Sponsor: Intel AI
The Intel® Nervana™ Neural Network Processor for Training (Intel® Nervana™ NNP-T) is designed to maximize efficiency in power usage, memory, and communication. The NNP-T focuses on sustained compute utilization for AI training workloads rather than peak TOPS alone.
The NNP-T allocates die area judiciously among MACs, local and off-die memory, and communication (both on-die and off-die) to create a device with a large amount of compute that can be kept fed with data on problem sizes both large and small. It keeps the compute fed and maximizes power efficiency by retaining data locally and reusing it as much as possible.
We will demonstrate end-to-end training of ResNet-50, an image classification workload, using a popular deep learning framework.