Deep neural networks have been revolutionizing several application domains in artificial intelligence: computer vision, speech recognition, and natural language processing. Concurrently with this progress in deep learning, significant advances have been made in virtual reality, augmented reality, and smart wearable devices. These advances create unprecedented opportunities for researchers to tackle the fundamental challenges of deploying deep learning systems on portable devices with limited resources (e.g., memory, CPU, energy, bandwidth). Efficient methods in deep learning can have a crucial impact on the use of distributed systems, embedded devices, and FPGAs for many AI tasks. Achieving these goals calls for ground-breaking innovations on many fronts: learning, optimization, computer architecture, data compression, indexing, and hardware design.
This workshop is sponsored by the Allen Institute for Artificial Intelligence (AI2). We offer partial travel grants and registration support for a limited number of workshop participants.
The goal of this workshop is to provide a venue for researchers interested in developing efficient techniques for deep neural networks to present new work, exchange ideas, and build connections. The workshop will feature keynotes and invited talks from prominent researchers, as well as a poster session that fosters in-depth discussion. In addition, a discussion panel of experts will examine possible approaches (hardware, software, algorithms, ...) to designing efficient methods in deep learning.
We invite submissions of short papers and extended abstracts related to the following topics in the context of efficient methods in deep learning:
- Network compression
- Quantized neural networks (e.g., binary neural networks; see the illustrative sketch after this list)
- Hardware accelerators for neural networks
- Training and inference with low-precision operations
- Real-time applications of deep neural networks (e.g., object detection, image segmentation, online language translation, ...)
- Distributed training/inference of deep neural networks
- Fast optimization methods for neural networks
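Several of these topics, in particular quantized and low-precision networks, revolve around the same basic idea: replacing full-precision weights with low-precision surrogates such as binary values. As a rough illustration, here is a minimal NumPy sketch of XNOR-Net-style weight binarization, which approximates a weight tensor W by alpha * sign(W) with the scaling factor alpha = mean(|W|); the function and variable names are our own, not code from any workshop submission.

```python
# Minimal sketch of binary weight quantization (XNOR-Net style).
# Illustrative only: names and structure are assumptions, not workshop code.
import numpy as np

def binarize_weights(W):
    """Approximate a real-valued weight tensor W by alpha * sign(W).

    alpha = mean(|W|) is the per-tensor scale that minimizes the
    L2 error ||W - alpha * B||^2 over binary B in {-1, +1}.
    """
    alpha = np.abs(W).mean()   # optimal per-tensor scaling factor
    B = np.sign(W)             # entries in {-1, 0, +1}
    B[B == 0] = 1.0            # map exact zeros to +1 to keep B binary
    return alpha, B

# Usage: compare a full-precision product with its binary approximation.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)
alpha, B = binarize_weights(W)
print(W @ x)            # full-precision result
print(alpha * (B @ x))  # binary-weight approximation
```

The binary product B @ x can be computed with additions and sign flips only (or with XNOR/popcount on bit-packed operands), which is the source of the speed and memory savings these methods target.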
Schedule

Fri 12:00 a.m. - 12:15 a.m.  Mohammad Rastegari: Introductory remarks (Talk)
Fri 12:15 a.m. - 12:45 a.m.  William Dally: Efficient Methods and Hardware for Deep Neural Networks (Talk)
Fri 12:45 a.m. - 1:15 a.m.   Amir Khosrowshahi: Processor architectures for deep learning (Talk)
Fri 2:00 a.m. - 2:30 a.m.    Ali Farhadi: Deep Learning on Resource Constraint Devices (Talk)
Fri 2:30 a.m. - 3:00 a.m.    Oral Presentations (Session A) (Talk)
Fri 3:00 a.m. - 4:30 a.m.    Lunch (on your own)
Fri 4:30 a.m. - 5:00 a.m.    Vivienne Sze: Joint Design of Algorithms and Hardware for Energy-efficient DNNs (Talk)
Fri 5:00 a.m. - 5:30 a.m.    Yoshua Bengio: From Training Low Precision Neural Nets to Training Analog Continuous-Time Machines (Talk)
Fri 5:30 a.m. - 6:30 a.m.    Poster presentations and Coffee break
Fri 6:30 a.m. - 7:00 a.m.    Kurt Keutzer: High-Performance Deep Learning (Talk)
Fri 7:00 a.m. - 7:30 a.m.    Oral Presentations (Session B) (Talk)
Fri 7:30 a.m. - 7:45 a.m.    Mohammad Rastegari: Closing remarks (Talk)
Author Information
Mohammad Rastegari (Allen Institute for Artificial Intelligence (AI2))
Matthieu Courbariaux (Université de Montréal)