The 2nd Workshop on Machine Learning on the Phone and other Consumer Devices (MLPCD 2) aims to build on the success of the 1st MLPCD workshop held at NIPS 2017 in Long Beach, CA.
The first edition attracted over 200 attendees and led to active research and panel discussions, as well as follow-up contributions to the open-source community (e.g., the release of new inference libraries, tools, models, and standardized representations of deep learning models). We believe that interest in this space will only increase, and we hope the workshop acts as an influential catalyst to foster research and collaboration in this nascent community.
After the first workshop, which investigated initial directions and trends, the NIPS 2018 MLPCD workshop focuses on the theory and practical applications of on-device machine learning, an area that sits at the intersection of multiple topics of interest to NIPS and the broader machine learning community: efficient training and inference for deep learning and other machine learning models; interdisciplinary mobile applications involving vision, language, and speech understanding; and emerging topics such as the Internet of Things.
We plan to incorporate several new additions this year: an inspirational opening keynote on the "future of intelligent assistive & wearable experiences"; two panels, including a lively closing debate on the pros and cons of two key ML computing paradigms (cloud vs. on-device); solicited research papers on new and recent hot topics (e.g., theoretical and algorithmic work on low-precision models, compression, and sparsity for training and inference), related challenges, applications, and recent trends; and a demo session showcasing ML in action in real-world apps.
Description & Topics:
Deep learning, and machine learning in general, have changed the computing paradigm. Products today are built with machine intelligence as a central attribute, and consumers are beginning to expect near-human interaction with the appliances they use. However, much of the deep learning revolution has been limited to the cloud, enabled by popular toolkits such as Caffe, TensorFlow, and MXNet, and by specialized hardware such as TPUs. In comparison, mobile devices until recently were simply not fast enough, developer tools were limited, and few use cases required on-device machine learning. That has started to change, with advances in real-time computer vision and spoken language understanding driving real innovation in intelligent mobile applications. Several mobile-optimized neural network libraries were recently announced (Core ML, Caffe2 for mobile, TensorFlow Lite) that aim to dramatically lower the barrier to entry for mobile machine learning. Innovation and competition at the silicon layer have enabled new possibilities for hardware acceleration. To make things even better, mobile-optimized versions of several state-of-the-art benchmark models were recently open-sourced. A widespread increase in the availability of connected "smart" appliances for consumers, and of IoT platforms for industrial use cases, means there is an ever-expanding surface area for mobile intelligence and ambient devices in homes. Taken together, these advances suggest we are at the cusp of a rapid increase in research interest in on-device machine learning, and in particular in on-device neural computing.
Significant research challenges remain, however. Mobile devices are even more personal than "personal computers" ever were. Enabling machine learning while simultaneously preserving user trust requires ongoing advances in research on differential privacy and federated learning techniques. On-device ML has to keep model size and power usage low while simultaneously optimizing for accuracy, and several exciting approaches to mobile optimization of neural networks have recently been developed. Lastly, the now-prevalent use of camera and voice as interaction models has fueled exciting research on neural techniques for image and speech/language understanding. This area is highly relevant to multiple topics of interest to NIPS: core topics such as machine learning and efficient inference, interdisciplinary applications involving vision, language, and speech understanding, and emerging areas such as the Internet of Things.
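As one concrete illustration of the model-size versus accuracy trade-off mentioned above, the following is a minimal sketch of symmetric 8-bit post-training weight quantization, written in plain NumPy rather than any particular mobile library; the function names and the toy weight matrix are illustrative only:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric linear quantization of a float32 weight array to int8.

    The largest-magnitude weight maps to +/-127; everything else is
    rounded to the nearest integer multiple of the scale.
    """
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from int8 values and a scale."""
    return q.astype(np.float32) * scale

# Toy example: a random 256x256 weight matrix.
w = np.random.default_rng(0).normal(size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)

# int8 storage is 4x smaller than float32, and the worst-case
# per-weight error is bounded by half the quantization step.
err = np.max(np.abs(w - dequantize(q, scale)))
```

Real deployments add refinements (per-channel scales, zero points for asymmetric ranges, quantization-aware training), but the storage saving and the bounded rounding error shown here are the core of the idea.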
With this emerging interest, and the wealth of challenging research problems it brings, we proposed this second workshop at NIPS 2018 dedicated to on-device machine learning for mobile and ambient home consumer devices.
Areas/topics of interest include, but are not limited to:
* Model compression for efficient inference with deep networks and other ML models
* Privacy preserving machine learning
* Low-precision training/inference & Hardware acceleration of neural computing on mobile devices
* Real-time mobile computer vision
* Language understanding and conversational assistants on mobile devices
* Speech recognition on mobile and smart home devices
* Machine intelligence for mobile gaming
* ML for mobile health and other real-time prediction scenarios
* ML for on-device applications in the automotive industry (e.g., computer vision for self-driving cars)
* Software libraries (including open-source) optimized for on-device ML
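The privacy-preserving direction listed above is often addressed with federated learning, in which model updates, not raw user data, leave the device and are averaged on a server. Below is a minimal sketch of federated averaging on a synthetic linear-regression task; the function names, learning rates, and toy data setup are assumptions for illustration, not drawn from any specific library:

```python
import numpy as np

def local_update(w, X, y, lr=0.3, epochs=5):
    """A few steps of least-squares gradient descent on one client's data."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_averaging(w, clients, rounds=10):
    """Each round: every client trains locally from the current global
    model, then the server averages the results weighted by data size."""
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in clients:
            updates.append(local_update(w.copy(), X, y))
            sizes.append(len(y))
        sizes = np.array(sizes, dtype=float)
        w = sum(s * u for s, u in zip(sizes / sizes.sum(), updates))
    return w

# Toy setup: three clients whose data share the same underlying model.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = [(X := rng.normal(size=(50, 2)), X @ true_w) for _ in range(3)]
w = federated_averaging(np.zeros(2), clients)  # converges toward true_w
```

The server only ever sees aggregated parameters; production systems layer secure aggregation and differential privacy on top of this basic loop.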
Target Audience:
The next wave of ML applications will have significant processing on mobile and ambient devices. Some immediate examples of these are single-image classification, depth estimation, object recognition and segmentation running on-device for creative effects, or on-device recommender and ranking systems for privacy-preserving, low-latency experiences. This workshop will bring ML practitioners up to speed on the latest trends for on-device applications of ML, offer an overview of the latest HW and SW framework developments, and champion active research towards hard technical challenges emerging in this nascent area. The target audience for the workshop is both industrial and academic researchers and practitioners of on-device, native machine learning. The workshop will cover both “informational” and “aspirational” aspects of this emerging research area for delivering ground-breaking experiences on real-world products.
Given the relevance of the topic, the target audience (a mix of industry, academia, and related parties), and the timing (a confluence of research ideas and practical implementations, both in industry and through publicly available toolkits), we feel that NIPS 2018 would continue to be a great venue for this workshop.
Schedule:
Fri 5:15 a.m. - 5:30 a.m. | Opening (Chairs)
Fri 5:30 a.m. - 5:45 a.m. | Contributed talk: Aurélien Bellet
Fri 5:45 a.m. - 6:00 a.m. | Contributed talk: Neel Guha
Fri 6:00 a.m. - 6:30 a.m. | Invited talk: Prof. Kurt Keutzer
Fri 6:30 a.m. - 6:45 a.m. | Contributed talk: Ting-Wu Chin
Fri 6:45 a.m. - 7:30 a.m. | Keynote talk: Prof. Thad Starner
Fri 7:30 a.m. - 8:00 a.m. | Coffee break (morning)
Fri 8:00 a.m. - 8:30 a.m. | Invited talk: Prof. Max Welling
Fri 8:30 a.m. - 8:40 a.m. | Contributed talk: Zornitsa Kozareva
Fri 8:50 a.m. - 10:30 a.m. | Spotlights (poster, demo), Lunch & Poster Session: Brijraj Singh · Philip Dow · Robert Dürichen · Paul Whatmough · Chen Feng · Arijit Patra · Shishir Patil · Eunjeong Jeong · Zhongqiu Lin · Yuki Izumi · Isabelle Leang · Mimee Xu · wenhan zhang · Sam Witteveen
Fri 10:30 a.m. - 11:00 a.m. | Invited talk: Brendan McMahan
Fri 11:00 a.m. - 11:30 a.m. | Invited talk: Prof. Virginia Smith
Fri 11:30 a.m. - 11:45 a.m. | Contributed talk: Meghan Cowan
Fri 11:45 a.m. - 12:00 p.m. | Contributed talk: Kuan Wang
Fri 12:00 p.m. - 12:30 p.m. | Coffee break (afternoon)
Fri 12:30 p.m. - 1:00 p.m. | Invited talk: Jan Kautz
Fri 1:00 p.m. - 1:30 p.m. | Invited talk: Prof. Song Han
Fri 1:30 p.m. - 2:30 p.m. | Demo session: Sonam Damani · Philip Dow · Yuki Izumi · Shishir Patil · Isabelle Leang · Mimee Xu · wenhan zhang
Author Information
Sujith Ravi (Google Research)
Wei Chai (Google Inc)
Yangqing Jia (Facebook)
Hrishikesh Aradhye (Google, Inc)
Prateek Jain (Microsoft Research)