Workshop
Fri Dec 7th, 08:00 AM -- 06:30 PM
NIPS 2018 workshop on Compact Deep Neural Networks with industrial applications
Lixin Fan · Zhouchen Lin · Max Welling · Yurong Chen · Werner Bailer


This workshop aims to bring together researchers, educators, and practitioners who are interested in techniques as well as applications of making compact and efficient neural network representations. One main theme of the workshop discussion is to build consensus in this rapidly developing field, and in particular to establish close connections between researchers in the Machine Learning community and engineers in industry. We believe the workshop will be beneficial to academic researchers and industrial practitioners alike.

===
News and announcements:

Authors of the accepted papers listed below: please revise your submission according to the reviewers' comments to address the issues raised. If there is too much content to fit within the 3-page limit, you may use an appendix for supporting material such as proofs or detailed experimental results. The camera-ready abstract should include author information (name, email address, affiliation) and be prepared using the NIPS camera-ready template.

Please submit the camera-ready abstract through OpenReview (https://openreview.net/group?id=NIPS.cc/2018/Workshop/CDNNRIA) by Nov. 12th, using your previous submission page to update the abstract. If you have to postpone the submission, please inform us immediately; otherwise, the abstract will be removed from the workshop schedule.

===
We invite you to submit original work in, but not limited to, the following areas:

Neural network compression techniques:
. Binarization, quantization, pruning, thresholding and coding of neural networks
. Efficient computation and acceleration of deep convolutional neural networks
. Deep neural network computation in low power consumption applications (e.g., mobile or IoT devices)
. Differentiable sparsification and quantization of deep neural networks
. Benchmarking of deep neural network compression techniques
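For readers new to these topics, the two most common techniques above, pruning and quantization, can be illustrated in a few lines. The sketch below is purely illustrative (it is not from any workshop submission, and the function names are my own): magnitude pruning zeroes out the smallest-magnitude fraction of a weight list, and uniform quantization snaps each weight to one of 2^b evenly spaced levels.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with smallest magnitude."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

def uniform_quantize(weights, num_bits):
    """Map each weight to the nearest of 2**num_bits evenly spaced levels."""
    lo, hi = min(weights), max(weights)
    if hi == lo:
        return list(weights)
    step = (hi - lo) / (2 ** num_bits - 1)
    return [lo + round((w - lo) / step) * step for w in weights]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
pruned = magnitude_prune(w, 0.5)        # half the weights become exactly zero
quantized = uniform_quantize(pruned, 2) # at most 4 distinct values remain
```

In practice these operations are applied per-layer to tensors, typically interleaved with fine-tuning to recover accuracy; the workshop papers below explore many refinements of exactly these ideas.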

Neural network representation and exchange:
. Exchange formats for (trained) neural networks
. Efficient deployment strategies for neural networks
. Industrial standardization of deep neural network representations
. Performance evaluation methods of compressed networks in application context (e.g., multimedia encoding and processing)

Video & media compression methods using DNNs, such as those developed in the MPEG group:
. Improving video coding standard development using deep neural networks
. Increasing the practical applicability of network compression methods

An extended abstract (3 pages using the NIPS style, see https://nips.cc/Conferences/2018/PaperInformation/StyleFiles ) in PDF format should be submitted for evaluation of the originality and quality of the work. The evaluation is double-blind, and the abstract must be anonymous. References may extend beyond the 3-page limit, and parallel submissions to a journal or conference (e.g. AAAI or ICLR) are permitted.

Submissions will be accepted as contributed talks (oral) or poster presentations. Extended abstracts should be submitted through OpenReview (https://openreview.net/group?id=NIPS.cc/2018/Workshop/CDNNRIA) by 20 Oct 2018. All accepted abstracts will be posted on the workshop website and archived.

Selection policy: all submitted abstracts will be evaluated based on their novelty, soundness, and impact. At the workshop we encourage DISCUSSION about NEW IDEAS; each submitter is thus expected to respond actively on the OpenReview page and answer any questions about his/her ideas. Willingness to respond in the OpenReview Q&A discussions will be an important factor in the selection of oral and poster presentations.

Important dates:
. Extended abstract submission deadline: 20 Oct 2018
. Acceptance notification: 29 Oct 2018
. Camera ready submission: 12 Nov 2018
. Workshop: 7 Dec 2018

Submission:
Please submit your extended abstract through the OpenReview system (https://openreview.net/group?id=NIPS.cc/2018/Workshop/CDNNRIA).
For prospective authors: please send author information to the workshop chairs (lixin.fan@nokia.com) so that your submission can be assigned to reviewers without conflicts of interest.
. Reviewers' comments will be released by Oct. 24th, and authors must reply by Oct. 27th, leaving two days for decision-making.
. Authors are highly recommended to submit abstracts early, in case more time is needed to address reviewers' comments.

NIPS Complimentary workshop registration
We will help authors of accepted submissions get access to a reserve pool of NIPS tickets, so please register for the workshop early.


===
Accepted papers & authors:

1. Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters,
Marton Havasi, Robert Peharz, José Miguel Hernández-Lobato

2. Neural Network Compression using Transform Coding and Clustering,
Thorsten Laude, Jörn Ostermann

3. Pruning neural networks: is it time to nip it in the bud?,
Elliot J. Crowley, Jack Turner, Amos Storkey, Michael O'Boyle

4. Compressing Recurrent Neural Networks with Tensor Ring for Action Recognition,
Yu Pan, Jing Xu, Maolin Wang, Fei Wang, Kun Bai, Zenglin Xu

5. Efficient Inference on Deep Neural Networks by Dynamic Representations and Decision Gates,
Mohammad Saeed Shafiee, Mohammad Javad Shafiee, Alexander Wong

6. Iteratively Training Look-Up Tables for Network Quantization,
Fabien Cardinaux, Stefan Uhlich, Kazuki Yoshiyama, Javier Alonso García, Stephen Tiedemann, Thomas Kemp, Akira Nakamura

7. Hybrid Pruning: Thinner Sparse Networks for Fast Inference on Edge Devices,
Xiaofan Xu, Mi Sun Park, Cormac Brick

8. Compression of Acoustic Event Detection Models with Low-rank Matrix Factorization and Quantization Training,
Bowen Shi, Ming Sun, Chieh-Chi Kao, Viktor Rozgic, Spyros Matsoukas, Chao Wang

9. On Learning Wire-Length Efficient Neural Networks,
Christopher Blake, Luyu Wang, Giuseppe Castiglione, Christopher Srinivasa, Marcus Brubaker

10. FLOPs as a Direct Optimization Objective for Learning Sparse Neural Networks,
Raphael Tang, Ashutosh Adhikari, Jimmy Lin

11. Three Dimensional Convolutional Neural Network Pruning with Regularization-Based Method,
Yuxin Zhang, Huan Wang, Yang Luo, Roland Hu

12. Differentiable Training for Hardware Efficient LightNNs,
Ruizhou Ding, Zeye Liu, Ting-Wu Chin, Diana Marculescu, R.D. (Shawn) Blanton

13. Structured Pruning for Efficient ConvNets via Incremental Regularization,
Huan Wang, Qiming Zhang, Yuehai Wang, Haoji Hu

14. Block-wise Intermediate Representation Training for Model Compression,
Animesh Koratana, Daniel Kang, Peter Bailis, Matei Zaharia

15. Targeted Dropout,
Aidan N. Gomez, Ivan Zhang, Kevin Swersky, Yarin Gal, Geoffrey E. Hinton

16. Adaptive Mixture of Low-Rank Factorizations for Compact Neural Modeling,
Ting Chen, Ji Lin, Tian Lin, Song Han, Chong Wang, Denny Zhou

17. Differentiable Fine-grained Quantization for Deep Neural Network Compression,
Hsin-Pai Cheng, Yuanjun Huang, Xuyang Guo, Yifei Huang, Feng Yan, Hai Li, Yiran Chen

18. Transformer to CNN: Label-scarce distillation for efficient text classification,
Yew Ken Chia, Sam Witteveen, Martin Andrews

19. EnergyNet: Energy-Efficient Dynamic Inference,
Yue Wang, Tan Nguyen, Yang Zhao, Zhangyang Wang, Yingyan Lin, Richard Baraniuk

20. Recurrent Convolutions: A Model Compression Point of View,
Zhendong Zhang, Cheolkon Jung

21. Rethinking the Value of Network Pruning,
Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, Trevor Darrell

22. Linear Backprop in non-linear networks,
Mehrdad Yazdani

23. Bayesian Sparsification of Gated Recurrent Neural Networks,
Ekaterina Lobacheva, Nadezhda Chirkova, Dmitry Vetrov

24. Demystifying Neural Network Filter Pruning,
Zhuwei Qin, Fuxun Yu, Chenchen Liu, Xiang Chen

25. Learning Compact Networks via Adaptive Network Regularization,
Sivaramakrishnan Sankarapandian, Anil Kag, Rachel Manzelli, Brian Kulis

26. Pruning at a Glance: A Structured Class-Blind Pruning Technique for Model Compression
Abdullah Salama, Oleksiy Ostapenko, Moin Nabi, Tassilo Klein

27. Succinct Source Coding of Deep Neural Networks
Sourya Basu, Lav R. Varshney

28. Fast On-the-fly Retraining-free Sparsification of Convolutional Neural Networks
Amir H. Ashouri, Tarek Abdelrahman, Alwyn Dos Remedios

29. PocketFlow: An Automated Framework for Compressing and Accelerating Deep Neural Networks
Jiaxiang Wu, Yao Zhang, Haoli Bai, Huasong Zhong, Jinlong Hou, Wei Liu, Junzhou Huang

30. Universal Deep Neural Network Compression
Yoojin Choi, Mostafa El-Khamy, Jungwon Lee

31. Compact and Computationally Efficient Representations of Deep Neural Networks
Simon Wiedemann, Klaus-Robert Mueller, Wojciech Samek

32. Dynamic parameter reallocation improves trainability of deep convolutional networks
Hesham Mostafa, Xin Wang

33. Compact Neural Network Solutions to Laplace's Equation in a Nanofluidic Device
Martin Magill, Faisal Z. Qureshi, Hendrick W. de Haan

34. Distilling Critical Paths in Convolutional Neural Networks
Fuxun Yu, Zhuwei Qin, Xiang Chen

35. SeCSeq: Semantic Coding for Sequence-to-Sequence based Extreme Multi-label Classification
Wei-Cheng Chang, Hsiang-Fu Yu, Inderjit S. Dhillon, Yiming Yang

===
A best paper award will be presented to the contribution selected by the reviewers, who will also take into account active discussions on OpenReview. One FREE NIPS ticket will be awarded to the best paper presenter.

The best paper award is given to the authors of "Rethinking the Value of Network Pruning",
Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, Trevor Darrell

=====
Acknowledgement to reviewers

The workshop organizers gratefully acknowledge the assistance of the following people, who reviewed submissions and actively discussed them with the authors:

Zhuang Liu, Ting-Wu Chin, Fuxun Yu, Huan Wang, Mehrdad Yazdani, Qigong Sun, Tim Genewein, Abdullah Salama, Anbang Yao, Chen Xu, Hao Li, Jiaxiang Wu, Zhisheng Zhong, Haoji Hu, Hesham Mostafa, Seunghyeon Kim, Xin Wang, Yiwen Guo, Yu Pan, Fereshteh Lagzi, Martin Magill, Wei-Cheng Chang, Yue Wang, Caglar Aytekin, Hannes Fassold, Martin Winter, Yunhe Wang, Faisal Qureshi, Filip Korzeniowski, Jianguo Li, Jiashi Feng, Mingjie Sun, Shiqi Wang, Tinghuai Wang, Xiangyu Zhang, Yibo Yang, Ziqian Chen, Francesco Cricri, Jan Schlüter, Jing Xu, Lingyu Duan, Maolin Wang, Naiyan Wang, Stephen Tyree, Tianshui Chen, Vasileios Mezaris, Christopher Blake, Chris Srinivasa, Giuseppe Castiglione, Amir Khoshamam, Kevin Luk, Luyu Wang, Jian Cheng, Pavlo Molchanov, Yihui He, Sam Witteveen, Peng Wang.

with special thanks to Ting-Wu Chin who contributed 7 reviewer comments.

=====

Workshop schedule on December 7th, 2018:

09:00 AM Opening and Introduction (Talk)
09:05 AM Rethinking the Value of Network Pruning (Oral presentation)
Zhuang Liu
09:30 AM Bandwidth efficient deep learning by model compression (Invited talk)
Song Han
09:55 AM Neural network compression in the wild: why aiming for high compression factors is not enough (Invited talk)
Tim Genewein
10:20 AM Linear Backprop in non-linear networks (Oral presentation)
Mehrdad Yazdani
10:45 AM Coffee break (morning) (break)
11:00 AM Network compression via differentiable pruning and quantization (Invited talk)
Max Welling
11:25 AM Deep neural networks for multimedia processing, coding and standardization (Invited talk)
Shan Liu
11:50 AM Bayesian Sparsification of Gated Recurrent Neural Networks (Oral presentation)
Nadezhda Chirkova
12:15 PM Lunch break (on your own) (break)
02:00 PM Efficient Computation of Deep Convolutional Neural Networks: A Quantization Perspective (Invited talk)
Jian Cheng
02:25 PM Deep neural network compression and acceleration (Invited talk)
Anbang Yao
02:50 PM Poster spotlight session. (Spotlight presentation)
Abdullah Salama, Wei-Cheng Chang, Aidan Gomez, Raphael Tang, Fuxun Yu, Zhendong Zhang, Yuxin Zhang, Ji Lin, Stephen Tiedemann, Kun Bai, Siva Sankarapandian, Marton Havasi, Jack Turner, Dave Cheng, Yue Wang, Xiaofan Xu, Ruizhou Ding, Haoji Hu, Mohammad Shafiee, Christopher Blake, Chieh-Chi Kao, Daniel Kang, Ken Chia, Amir Ashouri, Sourya Basu, Simon Wiedemann, Thorsten Laude
03:20 PM Coffee break (afternoon) (break)
03:30 PM Poster presentations (Poster session)
Simon Wiedemann, Tonny Wang, Ivan Zhang, Chong Wang, Mohammad Javad Shafiee, Rachel Manzelli, Wenbing Huang, Tassilo Klein
04:30 PM Panel discussion
Max Welling, Tim Genewein, Edwin Park, Cormac Brick
05:30 PM Challenges and lessons learned in DNN portability in production (Invited talk)
Joohoon Lee
05:55 PM Closing (Talk)