Poster

Flexpoint: An Adaptive Numerical Format for Efficient Training of Deep Neural Networks

Urs Köster · Tristan Webb · Xin Wang · Marcel Nassar · Arjun K Bansal · William Constable · Oguz Elibol · Stewart Hall · Luke Hornof · Amir Khosrowshahi · Carey Kloss · Ruby J Pai · Naveen Rao

Pacific Ballroom #75

Keywords: [ Deep Learning ] [ Efficient Inference Methods ] [ Efficient Training Methods ] [ Hardware and Systems ]


Abstract:

Deep neural networks are commonly developed and trained in 32-bit floating point format. Significant gains in performance and energy efficiency could be realized by training and inference in numerical formats optimized for deep learning. Despite advances in limited-precision inference in recent years, training neural networks at low bit-widths remains a challenging problem. Here we present the Flexpoint data format, aimed at a complete replacement of the 32-bit floating point format for training and inference, and designed to support modern deep network topologies without modifications. Flexpoint tensors have a shared exponent that is dynamically adjusted to minimize overflows and maximize the available dynamic range. We validate Flexpoint by training AlexNet, a deep residual network, and a generative adversarial network, using a simulator implemented with the neon deep learning framework. We demonstrate that 16-bit Flexpoint closely matches 32-bit floating point in training all three models, without any need for tuning model hyperparameters. Our results suggest Flexpoint is a promising numerical format for future hardware for training and inference.
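To make the shared-exponent idea concrete, here is a minimal sketch of a block floating-point encoding in Python: all entries of a tensor share one exponent, chosen from the largest magnitude so that no entry overflows the integer mantissa range. The function names and the exponent-selection rule are illustrative assumptions, not the paper's actual exponent-management algorithm.

```python
import numpy as np

def to_shared_exponent(tensor, mantissa_bits=16):
    """Encode a float tensor as integer mantissas with one shared exponent.

    Illustrative block floating-point sketch (assumption): the exponent is
    picked from the tensor's largest magnitude so no entry overflows the
    signed mantissa range; it is not the paper's exponent-update scheme.
    """
    max_int = 2 ** (mantissa_bits - 1) - 1          # e.g. 32767 for 16 bits
    max_abs = np.max(np.abs(tensor))
    if max_abs == 0:
        return np.zeros(tensor.shape, dtype=np.int16), 0
    # Smallest exponent such that max_abs / 2**exp fits in the mantissa range.
    exp = int(np.ceil(np.log2(max_abs / max_int)))
    mantissas = np.clip(np.round(tensor / 2.0 ** exp), -max_int, max_int)
    return mantissas.astype(np.int16), exp

def from_shared_exponent(mantissas, exp):
    """Decode back to float32: value = mantissa * 2**exponent."""
    return mantissas.astype(np.float32) * np.float32(2.0 ** exp)

# Round-trip a small tensor and inspect the quantization error.
x = np.random.randn(4, 4).astype(np.float32)
m, e = to_shared_exponent(x)
x_hat = from_shared_exponent(m, e)
print("shared exponent:", e, "max abs error:", np.max(np.abs(x - x_hat)))
```

Because the exponent is stored once per tensor rather than once per element, the per-element storage and arithmetic reduce to 16-bit integer operations, which is the source of the efficiency gains the abstract describes.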
