This workshop aims to bring together researchers, educators, and practitioners who are interested in both the techniques and the applications of compact and efficient neural network representations. One main theme of the workshop discussion is to build consensus in this rapidly developing field and, in particular, to establish close connections between researchers in the machine learning community and engineers in industry. We believe the workshop will benefit both academic researchers and industrial practitioners.
===
News and announcements:
. For authors of spotlight posters, please send your one-minute slides (preferably with recorded narration) to lixin.fan01@gmail.com, or bring them on a USB stick. See you at the workshop.
. Please note the change in the workshop schedule: due to visa issues, some speakers are unfortunately unable to attend the workshop.
. Some reserve NIPS/NeurIPS tickets are now available, on a first-come, first-served basis, for co-authors of accepted workshop papers! Please create NIPS accounts and inform us of the email addresses if reserve tickets are needed.
. For authors included in the spotlight session, please prepare short slides with a presentation time strictly within 1 minute. Preferably, record your presentation with audio and video (as instructed, e.g., at https://support.office.com/en-us/article/record-a-slide-show-with-narration-and-slide-timings-0b9502c6-5f6c-40ae-b1e7-e47d8741161c?ui=en-US&rs=en-US&ad=US#OfficeVersion=Windows).
. For authors included in the spotlight session, please also prepare a poster for your paper, and make sure that you or one of your co-authors will present it after the coffee break.
. Please make your poster 36W x 48H inches (approximately 91 x 122 cm). Make sure your poster is in portrait orientation and does not exceed the maximum size, since we have limited space for the poster session.
===
For authors of the following accepted papers, please revise your submission as per the reviewers' comments to address the issues raised. If there is too much content to fit within the 3-page limit, you may use an appendix for supporting material such as proofs or detailed experimental results. The camera-ready abstract should be prepared with author information (name, email address, affiliation) using the NIPS camera-ready template.
Please submit the camera-ready abstract through OpenReview (https://openreview.net/group?id=NIPS.cc/2018/Workshop/CDNNRIA) by Nov. 12th. Use your previous submission page to update the abstract. In case you have to postpone the submission, please inform us immediately; otherwise, the abstract will be removed from the workshop schedule.
===
We invite you to submit original work in, but not limited to, the following areas:
Neural network compression techniques:
. Binarization, quantization, pruning, thresholding and coding of neural networks
. Efficient computation and acceleration of deep convolutional neural networks
. Deep neural network computation in low power consumption applications (e.g., mobile or IoT devices)
. Differentiable sparsification and quantization of deep neural networks
. Benchmarking of deep neural network compression techniques
Neural network representation and exchange:
. Exchange formats for (trained) neural networks
. Efficient deployment strategies for neural networks
. Industrial standardization of deep neural network representations
. Performance evaluation methods of compressed networks in application context (e.g., multimedia encoding and processing)
Video & media compression methods using DNNs, such as those developed in the MPEG group:
. To improve video coding standard development by using deep neural networks
. To increase practical applicability of network compression methods
An extended abstract (3 pages using the NIPS style, see https://nips.cc/Conferences/2018/PaperInformation/StyleFiles ) in PDF format should be submitted for evaluation of the originality and quality of the work. The evaluation is double-blind, and the abstract must be anonymous. References may extend beyond the 3-page limit, and parallel submissions to journals or conferences (e.g., AAAI or ICLR) are permitted.
Submissions will be accepted as contributed talks (oral) or poster presentations. Extended abstracts should be submitted through OpenReview (https://openreview.net/group?id=NIPS.cc/2018/Workshop/CDNNRIA) by 20 Oct 2018. All accepted abstracts will be posted on the workshop website and archived.
Selection policy: all submitted abstracts will be evaluated based on their novelty, soundness, and impact. At the workshop we encourage DISCUSSION about NEW IDEAS; each submitter is thus expected to respond actively on the OpenReview page and answer any questions about his/her ideas. Willingness to respond in OpenReview Q&A discussions will be an important factor in the selection of accepted oral and poster presentations.
Important dates:
. Extended abstract submission deadline: 20 October 2018
. Acceptance notification: 29 October 2018
. Camera-ready submission: 12 November 2018
. Workshop: 7 December 2018
Submission:
Please submit your extended abstract through the OpenReview system (https://openreview.net/group?id=NIPS.cc/2018/Workshop/CDNNRIA).
For prospective authors: please send author information to the workshop chairs (lixin.fan@nokia.com), so that your submission can be assigned to reviewers without conflicts of interest.
. Reviewers' comments will be released by Oct. 24th, and authors must reply by Oct. 27th, which leaves us two days for decision-making.
. Authors are highly recommended to submit abstracts early, in case more time is needed to address the reviewers' comments.
Complimentary NIPS workshop registration
We will help authors of accepted submissions gain access to a reserve pool of NIPS tickets, so please register for the workshop early.
===
Accepted papers & authors:
1. Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters,
Marton Havasi, Robert Peharz, José Miguel Hernández-Lobato
2. Neural Network Compression using Transform Coding and Clustering,
Thorsten Laude, Jörn Ostermann
3. Pruning neural networks: is it time to nip it in the bud?,
Elliot J. Crowley, Jack Turner, Amos Storkey, Michael O'Boyle
4. Compressing Recurrent Neural Networks with Tensor Ring for Action Recognition,
Yu Pan, Jing Xu, Maolin Wang, Fei Wang, Kun Bai, Zenglin Xu
5. Efficient Inference on Deep Neural Networks by Dynamic Representations and Decision Gates,
Mohammad Saeed Shafiee, Mohammad Javad Shafiee, Alexander Wong
6. Iteratively Training Look-Up Tables for Network Quantization,
Fabien Cardinaux, Stefan Uhlich, Kazuki Yoshiyama, Javier Alonso García, Stephen Tiedemann, Thomas Kemp, Akira Nakamura
7. Hybrid Pruning: Thinner Sparse Networks for Fast Inference on Edge Devices,
Xiaofan Xu, Mi Sun Park, Cormac Brick
8. Compression of Acoustic Event Detection Models with Low-rank Matrix Factorization and Quantization Training,
Bowen Shi, Ming Sun, Chieh-Chi Kao, Viktor Rozgic, Spyros Matsoukas, Chao Wang
9. On Learning Wire-Length Efficient Neural Networks,
Christopher Blake, Luyu Wang, Giuseppe Castiglione, Christopher Srinivasa, Marcus Brubaker
10. FLOPs as a Direct Optimization Objective for Learning Sparse Neural Networks,
Raphael Tang, Ashutosh Adhikari, Jimmy Lin
11. Three Dimensional Convolutional Neural Network Pruning with Regularization-Based Method,
Yuxin Zhang, Huan Wang, Yang Luo, Roland Hu
12. Differentiable Training for Hardware Efficient LightNNs,
Ruizhou Ding, Zeye Liu, Ting-Wu Chin, Diana Marculescu, R.D. (Shawn) Blanton
13. Structured Pruning for Efficient ConvNets via Incremental Regularization,
Huan Wang, Qiming Zhang, Yuehai Wang, Haoji Hu
14. Block-wise Intermediate Representation Training for Model Compression,
Animesh Koratana, Daniel Kang, Peter Bailis, Matei Zaharia
15. Targeted Dropout,
Aidan N. Gomez, Ivan Zhang, Kevin Swersky, Yarin Gal, Geoffrey E. Hinton
16. Adaptive Mixture of Low-Rank Factorizations for Compact Neural Modeling,
Ting Chen, Ji Lin, Tian Lin, Song Han, Chong Wang, Denny Zhou
17. Differentiable Fine-grained Quantization for Deep Neural Network Compression,
Hsin-Pai Cheng, Yuanjun Huang, Xuyang Guo, Yifei Huang, Feng Yan, Hai Li, Yiran Chen
18. Transformer to CNN: Label-scarce distillation for efficient text classification,
Yew Ken Chia, Sam Witteveen, Martin Andrews
19. EnergyNet: Energy-Efficient Dynamic Inference,
Yue Wang, Tan Nguyen, Yang Zhao, Zhangyang Wang, Yingyan Lin, Richard Baraniuk
20. Recurrent Convolutions: A Model Compression Point of View,
Zhendong Zhang, Cheolkon Jung
21. Rethinking the Value of Network Pruning,
Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, Trevor Darrell
22. Linear Backprop in non-linear networks,
Mehrdad Yazdani
23. Bayesian Sparsification of Gated Recurrent Neural Networks,
Ekaterina Lobacheva, Nadezhda Chirkova, Dmitry Vetrov
24. Demystifying Neural Network Filter Pruning,
Zhuwei Qin, Fuxun Yu, Chenchen Liu, Xiang Chen
25. Learning Compact Networks via Adaptive Network Regularization,
Sivaramakrishnan Sankarapandian, Anil Kag, Rachel Manzelli, Brian Kulis
26. Pruning at a Glance: A Structured Class-Blind Pruning Technique for Model Compression
Abdullah Salama, Oleksiy Ostapenko, Moin Nabi, Tassilo Klein
27. Succinct Source Coding of Deep Neural Networks
Sourya Basu, Lav R. Varshney
28. Fast On-the-fly Retraining-free Sparsification of Convolutional Neural Networks
Amir H. Ashouri, Tarek Abdelrahman, Alwyn Dos Remedios
29. PocketFlow: An Automated Framework for Compressing and Accelerating Deep Neural Networks
Jiaxiang Wu, Yao Zhang, Haoli Bai, Huasong Zhong, Jinlong Hou, Wei Liu, Junzhou Huang
30. Universal Deep Neural Network Compression
Yoojin Choi, Mostafa El-Khamy, Jungwon Lee
31. Compact and Computationally Efficient Representations of Deep Neural Networks
Simon Wiedemann, Klaus-Robert Mueller, Wojciech Samek
32. Dynamic parameter reallocation improves trainability of deep convolutional networks
Hesham Mostafa, Xin Wang
33. Compact Neural Network Solutions to Laplace's Equation in a Nanofluidic Device
Martin Magill, Faisal Z. Qureshi, Hendrick W. de Haan
34. Distilling Critical Paths in Convolutional Neural Networks
Fuxun Yu, Zhuwei Qin, Xiang Chen
35. SeCSeq: Semantic Coding for Sequence-to-Sequence based Extreme Multi-label Classification
Wei-Cheng Chang, Hsiang-Fu Yu, Inderjit S. Dhillon, Yiming Yang
===
A best paper award will be presented to the contribution selected by the reviewers, who will also take into account active discussions on OpenReview. One FREE NIPS ticket will be awarded to the best paper presenter.
The best paper award is given to the authors of "Rethinking the Value of Network Pruning",
Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, Trevor Darrell
=====
Acknowledgement to reviewers
The workshop organizers gratefully acknowledge the assistance of the following people, who reviewed submissions and actively discussed them with authors:
Zhuang Liu, Ting-Wu Chin, Fuxun Yu, Huan Wang, Mehrdad Yazdani, Qigong Sun, Tim Genewein, Abdullah Salama, Anbang Yao, Chen Xu, Hao Li, Jiaxiang Wu, Zhisheng Zhong, Haoji Hu, Hesham Mostafa, Seunghyeon Kim, Xin Wang, Yiwen Guo, Yu Pan, Fereshteh Lagzi, Martin Magill, Wei-Cheng Chang, Yue Wang, Caglar Aytekin, Hannes Fassold, Martin Winter, Yunhe Wang, Faisal Qureshi, Filip Korzeniowski, Jianguo Li, Jiashi Feng, Mingjie Sun, Shiqi Wang, Tinghuai Wang, Xiangyu Zhang, Yibo Yang, Ziqian Chen, Francesco Cricri, Jan Schlüter, Jing Xu, Lingyu Duan, Maolin Wang, Naiyan Wang, Stephen Tyree, Tianshui Chen, Vasileios Mezaris, Christopher Blake, Chris Srinivasa, Giuseppe Castiglione, Amir Khoshamam, Kevin Luk, Luyu Wang, Jian Cheng, Pavlo Molchanov, Yihui He, Sam Witteveen, and Peng Wang,
with special thanks to Ting-Wu Chin, who contributed seven reviewer comments.
=====
Workshop meeting room: 517B
Workshop schedule on December 7th, 2018:
Fri 6:00 a.m. - 6:05 a.m. | Opening and Introduction (Talk)
Fri 6:05 a.m. - 6:30 a.m. | Rethinking the Value of Network Pruning (Oral presentation) | Zhuang Liu
Fri 6:30 a.m. - 6:50 a.m. | Bandwidth efficient deep learning by model compression (Invited talk) | Song Han
In the post-ImageNet era, computer vision and machine learning researchers are solving more complicated AI problems using larger datasets, driving the demand for more computation. However, we are in the post-Moore's Law world, where the amount of computation per unit cost and power is no longer increasing at its historic rate. This mismatch between the supply of and demand for computation highlights the need for co-designing efficient algorithms and hardware. In this talk, I will discuss bandwidth-efficient deep learning by model compression, together with efficient hardware architecture support, saving memory bandwidth, networking bandwidth, and engineer bandwidth.
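As a rough, self-contained illustration of the pruning-plus-quantization pipeline this talk is about (a minimal sketch, not the speaker's actual method), the NumPy snippet below zeroes out small-magnitude weights and then snaps the survivors to a uniform grid; the 90% sparsity and 8-bit settings are illustrative assumptions.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that `sparsity` fraction is zero."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def uniform_quantize(weights: np.ndarray, num_bits: int = 8) -> np.ndarray:
    """Snap weights to 2**num_bits uniformly spaced levels over their range."""
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / (2 ** num_bits - 1)
    return np.round((weights - w_min) / scale) * scale + w_min

# Illustrative usage on a random weight matrix.
w = np.random.randn(256, 256).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.9)        # keep ~10% of the weights
w_compressed = uniform_quantize(w_pruned, num_bits=8)
print(f"nonzero fraction: {np.count_nonzero(w_pruned) / w.size:.3f}")
```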
Fri 6:55 a.m. - 7:20 a.m. | Neural network compression in the wild: why aiming for high compression factors is not enough (Invited talk) | Tim Genewein
Abstract: The widespread use of state-of-the-art deep neural network models in the mobile, automotive, and embedded domains is often hindered by the steep computational resources required to run such models. The recent scientific literature proposes a plethora of ways to alleviate the problem, whether through efficient network architectures, efficiency-optimized hardware, or network compression methods. Unfortunately, the usefulness of a network compression method depends strongly on the other aspects (network architecture and target hardware) as well as on the task itself (classification, regression, detection, etc.), but very few publications consider this interplay. This talk highlights some of the issues that arise from the strong interplay between network architecture, target hardware, compression algorithm, and target task. It also points out some shortcomings in the current literature on network compression methods, such as incomparability of results (different baseline networks, different training and data-augmentation schemes, etc.), the lack of results on tasks other than classification, and the use of very different (and perhaps not very informative) quantitative performance indicators, such as naive compression rate, operations per second, or the size of stored weight matrices. The talk concludes by proposing guidelines and best practices for increasing the practical applicability of network compression methods, and with a call for standardized network compression benchmarks.
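To make the abstract's point about divergent performance indicators concrete, here is a hypothetical helper (all names and formulas are illustrative assumptions, not a proposed standard) that reports several of the indicators named above for a compressed weight matrix; note that the "naive" rate says nothing about actual speed on a given device.

```python
import numpy as np

def compression_report(w_orig: np.ndarray, w_comp: np.ndarray,
                       bits_per_weight: int = 8) -> dict:
    """Report a few of the (often incomparable) indicators used in the literature."""
    nonzero = np.count_nonzero(w_comp)
    return {
        # Naive rate: 32-bit dense storage vs. quantized storage of nonzeros only.
        "naive_compression_rate": (w_orig.size * 32) / max(nonzero * bits_per_weight, 1),
        "sparsity": 1.0 - nonzero / w_comp.size,
        # Dense multiply-accumulates for one matrix-vector product; whether
        # sparsity actually reduces this depends on hardware support.
        "dense_macs": w_orig.size,
        "stored_kilobytes": nonzero * bits_per_weight / 8 / 1024,
    }
```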
Fri 7:20 a.m. - 7:45 a.m. | Linear Backprop in non-linear networks (Oral presentation) | Mehrdad Yazdani
Fri 7:45 a.m. - 8:00 a.m. | Coffee break (morning)
Fri 8:00 a.m. - 8:25 a.m. | Challenges and lessons learned in DNN portability in production (Invited talk) | Joohoon Lee
Deploying state-of-the-art deep neural networks into high-performance production systems comes with many challenges. There is a plethora of deep learning frameworks with different operator designs and model formats. For a deployment platform developer, having a single portable model format to parse, instead of developing parsers for every framework, is very attractive. As a pioneer in the deep learning inference platform space, NVIDIA TensorRT introduced UFF as a proposed solution last year, and there are now more exchange formats available, such as ONNX and NNEF. In this talk, we will share lessons learned from TensorRT use cases in various production environments working with a portable format, with consideration of optimizations such as pruning, quantization, and auto-tuning on different target accelerators. We will also discuss some of the open challenges.
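As a concrete example of the exchange formats discussed in this talk, the sketch below exports a toy PyTorch model to ONNX and validates the resulting file; the model architecture, file name, and opset version are illustrative choices, not a recommended deployment recipe.

```python
import torch
import torch.nn as nn
import onnx

# Toy model standing in for a trained production network.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 10),
)
model.eval()

# Export with a dummy input that fixes the expected tensor shape.
dummy_input = torch.randn(1, 3, 32, 32)
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["logits"],
                  opset_version=11)

# Load the exported graph back and check that it is well-formed.
onnx.checker.check_model(onnx.load("model.onnx"))
print("ONNX export OK")
```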
Fri 8:30 a.m. - 9:15 a.m. | Bayesian Sparsification of Gated Recurrent Neural Networks (Oral presentation) | Nadezhda Chirkova
Fri 9:00 a.m. - 11:00 a.m. | Lunch break (on your own)
Fri 11:00 a.m. - 11:25 a.m. | Efficient Computation of Deep Convolutional Neural Networks: A Quantization Perspective (Invited talk) | Max Welling
Abstract: Neural network compression has become an important research area due to its great impact on the deployment of large models on resource-constrained devices. In this talk, we will introduce two novel techniques that allow for differentiable sparsification and quantization of deep neural networks; both are achieved via appropriate smoothing of the overall objective. As a result, we can directly train architectures to be highly compressed and hardware-friendly with off-the-shelf stochastic gradient descent optimizers.
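The smoothing construction used in the talk is not spelled out here. As a related and widely used illustration of training through a non-differentiable quantizer, the sketch below implements a straight-through estimator (STE): rounding in the forward pass, identity gradient in the backward pass. The 4-bit width and [-1, 1] clamping range are illustrative assumptions.

```python
import torch

class STEQuantize(torch.autograd.Function):
    """Round to 2**num_bits uniform levels in [-1, 1]; identity gradient (STE)."""

    @staticmethod
    def forward(ctx, w, num_bits=4):
        levels = 2 ** num_bits - 1
        w = w.clamp(-1.0, 1.0)
        # Map [-1, 1] -> {0, ..., levels}, round, and map back.
        return torch.round((w + 1) / 2 * levels) / levels * 2 - 1

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: pretend the quantizer was the identity map.
        return grad_output, None

w = torch.randn(8, requires_grad=True)
loss = (STEQuantize.apply(w, 4) ** 2).sum()
loss.backward()            # gradients reach w despite the rounding step
print(w.grad is not None)  # True
```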
Fri 11:25 a.m. - 11:50 a.m. | Deep neural network compression and acceleration (Invited talk) | Anbang Yao
In the past several years, Deep Neural Networks (DNNs) have demonstrated record-breaking accuracy on a variety of artificial intelligence tasks. However, the intensive storage and computational costs of DNN models make it difficult to deploy them on mobile and embedded systems for real-time applications. In this technical talk, Dr. Yao will introduce his team's recent work on deep neural network compression and acceleration, showing how they achieve impressive compression performance without noticeable loss of model prediction accuracy, from the perspectives of pruning and quantization.
Fri 11:50 a.m. - 12:20 p.m. | Poster spotlight session (Spotlight presentation)
Abdullah Salama · Wei-Cheng Chang · Aidan Gomez · Raphael Tang · Fuxun Yu · Zhendong Zhang · Yuxin Zhang · Ji Lin · Stephen Tiedemann · Kun Bai · Sivaramakrishnan Sankarapandian · Marton Havasi · Jack Turner · Hsin-Pai Cheng · Yue Wang · Xiaofan Xu · Ruizhou Ding · Haoji Hu · Mohammad Shafiee · Christopher Blake · Chieh-Chi Kao · Daniel Kang · Yew Ken Chia · Amir Ashouri · Sourya Basu · Simon Wiedemann · Thorsten Laude
Fri 12:20 p.m. - 12:30 p.m. | Coffee break (afternoon)
Fri 12:30 p.m. - 1:30 p.m. | Poster presentations (Poster session)
Simon Wiedemann · Huan Wang · Ivan Zhang · Chong Wang · Mohammad Javad Shafiee · Rachel Manzelli · Wenbing Huang · Tassilo Klein · Lifu Zhang · Ashutosh Adhikari · Faisal Qureshi · Giuseppe Castiglione
Fri 1:45 p.m. - 2:45 p.m. | Panel discussion | Max Welling · Tim Genewein · Edwin Park · Song Han
Fri 2:45 p.m. - 3:00 p.m. | Closing (Talk)
Author Information
Lixin Fan (Nokia Technologies)
Dr. Lixin Fan is a Principal Scientist affiliated with WeBank, China. His research interests include machine learning and deep learning, computer vision and pattern recognition, image and video processing, 3D big data processing, data visualization and rendering, augmented and virtual reality, mobile ubiquitous and pervasive computing, and intelligent human-computer interfaces. Dr. Fan is the (co-)author of more than 60 international journal and conference publications. He has also (co-)invented more than a hundred granted and pending patents filed in the US, Europe, and China. Before joining WeBank, Dr. Fan was affiliated with Nokia Technologies and Xerox Research Centre Europe (XRCE). His research work there included the well-recognized bag-of-keypoints method for image categorization.
Zhouchen Lin (Peking University)
Max Welling (University of Amsterdam / Qualcomm AI Research)
Yurong Chen (Intel Labs China)
Dr. Yurong Chen is a Principal Research Scientist and Sr. Research Director at Intel Corporation, and Director of the Cognitive Computing Lab at Intel Labs China. Currently, he is responsible for leading cutting-edge visual cognition and machine learning research for Intel smart computing and driving research innovation in smart visual data processing technologies on Intel platforms across Intel Labs. He drove the research and development of Deep Learning (DL) based Visual Understanding (VU) and leading face analysis technologies to impact Intel architectures and platforms, and delivered core technologies to help differentiate Intel products including the Intel Movidius VPU, RealSense SDK, CV SDK, OpenVINO, Unite, IoT video E2E analytics solutions, and client apps. His team also delivered core AI technologies such as 3D face technology and tiger Re-ID for “Chris Lee World’s First AI Music Video”, “The Great Wall Restoration” and “Saving Amur Tigers” to promote Intel’s AI leadership. Meanwhile, his team won or achieved top rankings in many international visual challenge tasks, including image matching and multi-view reconstruction (CVPR 2019), adversarial vision (NeurIPS 2018/17), multimodal emotion recognition (ACM ICMI EmotiW 2017/16/15), object detection (MS COCO 2017), visual question answering (VQA 2017), and video description (MSR-VTT 2016). He led the team to win the Intel China Award (top team award of Intel China) in 2016 and the Intel Labs Academic Award (top award of Intel Labs), the Gordy Award, in 2016, 2015, and 2014 for outstanding research achievements on DL-based VU, multimodal emotion recognition, and advanced visual analytics. He has published over 60 technical papers (in CVPR, ICCV, ECCV, TPAMI, IJCV, NeurIPS, ICLR, IJCAI, IEEE Micro, etc.) and holds 50+ issued/pending US/PCT patents. Dr. Chen joined Intel in 2004 after finishing his postdoctoral research at the Institute of Software, CAS. He received his Ph.D. degree from Tsinghua University in 2002.
Werner Bailer (JOANNEUM RESEARCH)