Sat Dec 9th 08:00 AM -- 06:30 PM @ 102 A+B
Machine Learning on the Phone and other Consumer Devices
Hrishikesh Aradhye · Joaquin Quinonero Candela · Rohit Prasad

Deep machine learning has changed the computing paradigm. Today's products are built with machine intelligence as a central attribute, and consumers are beginning to expect near-human interaction with the appliances they use. However, much of the deep learning revolution has been limited to the cloud, enabled by popular toolkits such as Caffe, TensorFlow, and MXNet, and by specialized hardware such as TPUs. Until recently, mobile devices were simply not fast enough, developer tools were limited, and few use cases required on-device machine learning. That has started to change, with advances in real-time computer vision and spoken language understanding driving real innovation in intelligent mobile applications. Several mobile-optimized neural network libraries were recently announced (CoreML [1], Caffe2 for mobile [2], TensorFlow Lite [3]), which aim to dramatically reduce the barrier to entry for mobile machine learning. Innovation and competition at the silicon layer have enabled new possibilities for hardware acceleration, and mobile-optimized versions of several state-of-the-art benchmark models were recently open sourced [4]. The growing availability of connected “smart” appliances for consumers and of IoT platforms for industrial use cases means an ever-expanding surface area for mobile intelligence and ambient devices in homes. Taken together, these advances suggest that we are at the cusp of a rapid increase in research interest in on-device machine learning, and in particular in on-device neural computing.

Significant research challenges remain, however. Mobile devices are even more personal than “personal computers” ever were, so enabling machine learning while preserving user trust requires ongoing advances in differential privacy and federated learning. On-device ML must keep model size and power usage low while simultaneously optimizing for accuracy, and several promising new approaches to mobile optimization of neural networks have recently emerged. Lastly, the now-prevalent use of camera and voice as interaction models has fueled exciting research on neural techniques for image and speech/language understanding.
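To make the federated learning idea mentioned above concrete, here is a toy sketch of one round of federated averaging on a scalar least-squares model. This is purely illustrative (the function names, the scalar model y ≈ w·x, and all hyperparameters are this sketch's own assumptions, not any production framework's API); the key property it demonstrates is that only model weights, never raw examples, leave each client.

```python
def local_update(w, data, lr=0.1, epochs=5):
    # One client's local training: a few steps of gradient descent on
    # scalar least-squares (y ~ w * x), starting from the global weight.
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_averaging_round(global_w, clients, lr=0.1, epochs=5):
    # One round of federated averaging: each client trains locally on
    # its own data, then the server averages the resulting models,
    # weighted by dataset size. Raw examples never leave the client.
    n_total = sum(len(data) for data in clients)
    return sum(
        (len(data) / n_total) * local_update(global_w, data, lr, epochs)
        for data in clients
    )

# Two simulated clients whose (x, y) data follow the same rule y = 3x;
# repeated rounds recover the shared weight without pooling the data.
clients = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(0.5, 1.5), (3.0, 9.0), (1.5, 4.5)],
]
w = 0.0
for _ in range(40):
    w = federated_averaging_round(w, clients)
```

In a real deployment the server would also sample only a fraction of clients per round and add differential-privacy noise before aggregation; those refinements are omitted here for brevity.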

With this emerging interest and this wealth of challenging research problems in mind, we propose the first NIPS workshop dedicated to on-device machine learning for mobile and ambient home consumer devices. We believe that interest in this space will only increase, and we hope the workshop serves as a catalyst for research and collaboration in this nascent community.

The next wave of ML applications will involve significant processing on mobile and ambient devices. Immediate examples include single-image depth estimation, on-device object recognition and segmentation for creative effects, and on-device recommender and ranking systems for privacy-preserving, low-latency experiences. This workshop will bring ML practitioners up to speed on the latest trends in on-device applications of ML, offer an overview of the latest hardware and software framework developments, and champion active research on the hard technical challenges emerging in this nascent area. The target audience is industrial and academic researchers and practitioners of on-device, native machine learning. The workshop will cover both the “informational” and the “aspirational” aspects of this emerging research area for delivering ground-breaking experiences in real-world products.


08:05 AM Qualcomm presentation on ML-optimized mobile hardware (Talk)
Harris Teague
08:30 AM fpgaConvNet: A Toolflow for Mapping Diverse Convolutional Neural Networks on Embedded FPGAs (Talk)
Stylianos Venieris
08:45 AM High performance ultra-low-precision convolutions on mobile devices (Talk)
Andrew Tulloch, Yangqing Jia
09:00 AM Caffe2: Lessons from Running Deep Learning on the World’s Smart Phones (Talk)
Yangqing Jia
09:30 AM CoreML: High-Performance On-Device Inference (Talk)
Gaurav Kapoor
10:00 AM Data center to the edge: a journey with TensorFlow (Talk)
Rajat Monga
11:00 AM On-Device ML Frameworks (Panel Discussion)
Jeff Gehlhaar, Yangqing Jia, Rajat Monga
11:45 AM Poster Spotlight 1 (Spotlight)
12:05 PM Lunch (Break)
12:05 PM Poster Session 1 (Poster Session)
01:30 PM Federated learning for model training on decentralized data (Talk)
Daniel Ramage
02:00 PM Personalized and Private Peer-to-Peer Machine Learning (Talk)
Aurélien Bellet, Rachid Guerraoui, Marc Tommasi
02:15 PM SquishedNets: Squishing SqueezeNet further for edge device scenarios via deep evolutionary synthesis (Talk)
Francis Li
02:30 PM A Cascade Architecture for Keyword Spotting on Mobile Devices (Talk)
Raziel Alvarez, Chris Thornton, Mohammadali Ghodrat
02:45 PM Multiple-Instance, Cascaded Classification for Keyword Spotting in Narrow-Band Audio (Talk)
Ahmad Abdulkader, Kareem Nassar, Mohamed Mahmoud, Daniel Galvez
03:00 PM Coffee Break (Break)
03:30 PM Machine Learning for Alexa (Talk)
Arindam Mandal
04:00 PM Now Playing: Continuous low-power music recognition (Talk)
Marvin Ritter, Ruiqi Guo, Sanjiv Kumar, Julian J Odell, Mihajlo Velimirović, Dominik Roblek, James Lyon
04:15 PM Learning On-Device Conversational Models (Talk)
Sujith Ravi, Tom Rudick, Yicheng Fan
04:30 PM Google Lens (Talk)
Hartwig Adam
05:00 PM Poster Spotlight 2 (Spotlight)
05:20 PM Poster Session 2 (Poster Session)
Farhan Shafiq, Antonio Tomas Nevado Vilchez, Takato Yamada, Sakyasingha Dasgupta, Robin Geyer, Moin Nabi, Crefeda Rodrigues, Edoardo Manino, Alexander Serb, Miguel A. Carreira-Perpinan, Kar Wai Lim, Bryan Kian Hsiang Low, Rohit Pandey, Marie C White, Pavel Pidlypenskyi, Xue Wang, Christine Kaeser-Chen, Michael Zhu, Suyog Gupta, Sam Leroux