
Workshop
Sat Dec 09 08:00 AM -- 06:30 PM (PST) @ 102 A+B
Machine Learning on the Phone and other Consumer Devices
Hrishikesh Aradhye · Joaquín Quiñonero Candela · Rohit Prasad


Deep machine learning has changed the computing paradigm. Today's products are built with machine intelligence as a central attribute, and consumers are beginning to expect near-human interaction with the appliances they use. However, much of the deep learning revolution has been limited to the cloud, enabled by popular toolkits such as Caffe, TensorFlow, and MXNet, and by specialized hardware such as TPUs. Until recently, mobile devices were simply not fast enough, developer tools were limited, and few use cases required on-device machine learning. That has started to change: advances in real-time computer vision and spoken language understanding are driving real innovation in intelligent mobile applications. Several mobile-optimized neural network libraries were recently announced (CoreML [1], Caffe2 for mobile [2], TensorFlow Lite [3]), which aim to dramatically reduce the barrier to entry for mobile machine learning. Innovation and competition at the silicon layer have enabled new possibilities for hardware acceleration, and mobile-optimized versions of several state-of-the-art benchmark models were recently open sourced [4]. The growing availability of connected “smart” appliances for consumers and of IoT platforms for industrial use cases means an ever-expanding surface area for mobile intelligence and ambient devices in homes. Taken together, these advances suggest that we are at the cusp of a rapid increase in research interest in on-device machine learning, and in particular in on-device neural computing.
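Much of the size reduction in mobile-optimized benchmark models such as MobileNets [4] comes from replacing standard convolutions with depthwise-separable ones. A minimal back-of-the-envelope sketch (layer sizes chosen purely for illustration, not taken from any particular model):

```python
# Illustrative parameter-count comparison: a standard k x k convolution needs
# k*k*C_in*C_out weights, while a depthwise-separable convolution uses a
# k x k depthwise step (k*k*C_in weights) plus a 1x1 pointwise step
# (C_in*C_out weights). Layer sizes below are hypothetical examples.

def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    """Weights in a standard k x k convolution layer (biases ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(k: int, c_in: int, c_out: int) -> int:
    """Weights in a depthwise-separable replacement for the same layer."""
    depthwise = k * k * c_in   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1x1 convolution mixing channels
    return depthwise + pointwise

if __name__ == "__main__":
    k, c_in, c_out = 3, 256, 256
    std = standard_conv_params(k, c_in, c_out)    # 589,824 weights
    sep = separable_conv_params(k, c_in, c_out)   # 67,840 weights
    print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

For a 3x3 layer with 256 input and output channels, the separable form needs roughly 8.7x fewer weights, which is the kind of saving that makes on-device inference practical.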

Significant research challenges remain, however. Mobile devices are even more personal than “personal computers” ever were. Enabling machine learning while preserving user trust requires ongoing advances in differential privacy and federated learning. On-device ML must keep model size and power usage low while simultaneously optimizing for accuracy, and several exciting new approaches to mobile optimization of neural networks are being developed. Lastly, the now-prevalent use of camera and voice as interaction models has fueled exciting research on neural techniques for image and speech/language understanding.
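The core server-side step of federated learning can be sketched in a few lines: each device trains on its own data and uploads only model weights, which the server combines weighted by local dataset size, so raw user data never leaves the device. The names below are illustrative, not from any particular framework:

```python
# Hedged sketch of federated averaging's aggregation step (assumed, simplified
# setting: each client sends a weight vector plus its local example count).
from typing import List

def federated_average(client_weights: List[List[float]],
                      client_sizes: List[int]) -> List[float]:
    """Data-size-weighted average of per-client model weight vectors."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    averaged = [0.0] * dim
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            averaged[i] += (n / total) * w
    return averaged

if __name__ == "__main__":
    # Two hypothetical devices: one with 300 local examples, one with 100.
    clients = [[1.0, 2.0], [5.0, 6.0]]
    sizes = [300, 100]
    print(federated_average(clients, sizes))  # [2.0, 3.0]
```

Real systems layer secure aggregation and differential-privacy noise on top of this step; the sketch shows only the averaging itself.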

With this emerging interest and the wealth of challenging research problems in mind, we propose the first NIPS workshop dedicated to on-device machine learning for mobile and ambient home consumer devices. We believe interest in this space will only grow, and we hope the workshop serves as a catalyst to foster research and collaboration in this nascent community.

The next wave of ML applications will involve significant processing on mobile and ambient devices. Immediate examples include single-image depth estimation, object recognition and segmentation running on-device for creative effects, and on-device recommender and ranking systems for privacy-preserving, low-latency experiences. This workshop will bring ML practitioners up to speed on the latest trends in on-device applications of ML, offer an overview of the latest hardware and software framework developments, and champion active research on the hard technical challenges emerging in this nascent area. The target audience is industrial and academic researchers and practitioners of on-device, native machine learning. The workshop will cover both “informational” and “aspirational” aspects of this emerging research area for delivering ground-breaking experiences on real-world products.


[1] https://developer.apple.com/machine-learning/
[2] https://caffe2.ai/
[3] https://www.tensorflow.org/mobile/
[4] https://opensource.googleblog.com/2017/06/mobilenets-open-source-models-for.html

Qualcomm presentation on ML-optimized mobile hardware (Talk)
fpgaConvNet: A Toolflow for Mapping Diverse Convolutional Neural Networks on Embedded FPGAs (Talk)
High performance ultra-low-precision convolutions on mobile devices (Talk)
Caffe2: Lessons from Running Deep Learning on the World’s Smart Phones (Talk)
CoreML: High-Performance On-Device Inference (Talk)
Data center to the edge: a journey with TensorFlow (Talk)
On-Device ML Frameworks (Panel Discussion)
Poster Spotlight 1 (Spotlight)
Lunch (Break)
Poster Session 1 (Poster Session)
Federated learning for model training on decentralized data (Talk)
Personalized and Private Peer-to-Peer Machine Learning (Talk)
SquishedNets: Squishing SqueezeNet further for edge device scenarios via deep evolutionary synthesis (Talk)
A Cascade Architecture for Keyword Spotting on Mobile Devices (Talk)
Multiple-Instance, Cascaded Classification for Keyword Spotting in Narrow-Band Audio (Talk)
Coffee Break (Break)
Machine Learning for Alexa (Talk)
Now Playing: Continuous low-power music recognition (Talk)
Learning On-Device Conversational Models (Talk)
Google Lens (Talk)
Poster Spotlight 2 (Spotlight)
Poster Session 2 (Poster Session)