
Deep Learning: Bridging Theory and Practice
Sanjeev Arora · Maithra Raghu · Russ Salakhutdinov · Ludwig Schmidt · Oriol Vinyals

Sat Dec 09 08:00 AM -- 06:30 PM (PST) @ Hall A
Event URL: https://ludwigschmidt.github.io/nips17-dl-workshop-website/

The past five years have seen a huge increase in the capabilities of deep neural networks. Maintaining this rate of progress, however, faces steep challenges and awaits fundamental insights. As our models become more complex and venture into areas such as unsupervised learning or reinforcement learning, designing improvements becomes more laborious, and success can be brittle and hard to transfer to new settings.

This workshop seeks to highlight recent work that uses theory as well as systematic experiments to isolate the fundamental questions that need to be addressed in deep learning. These efforts have helped flesh out core questions on topics such as generalization, adversarial robustness, large-batch training, generative adversarial networks, and optimization, and they point towards elements of a theory of deep learning that is expected to emerge in the future.

The workshop aims to enhance this confluence of theory and practice, highlighting influential work with these methods, future open directions, and core fundamental problems. There will be an emphasis on discussion, via panels and round tables, to identify future research directions that are promising and tractable.

Sat 8:35 a.m. - 8:45 a.m.
Opening Remarks (Talk)
Sat 8:45 a.m. - 9:15 a.m.

Generalization, Memorization and SGD

Sat 9:15 a.m. - 9:45 a.m.

Bridging Theory and Practice of GANs

Sat 9:45 a.m. - 10:00 a.m.

1) Generalization in deep nets: the role of distance from initialization
2) Entropy-SG(L)D optimizes the prior of a (valid) PAC-Bayes bound
3) Large Batch Training of DNNs with Layer-wise Adaptive Rate Scaling

Sat 10:00 a.m. - 10:30 a.m.

Generalization in Deep Networks

Sat 10:30 a.m. - 11:00 a.m.
Coffee (Break)
Sat 11:00 a.m. - 11:30 a.m.

Experimental design in Deep Reinforcement Learning

Sat 11:30 a.m. - 11:45 a.m.

1) Measuring robustness of NNs via Minimal Adversarial Examples
2) A classification based perspective on GAN-distributions
3) Learning one hidden layer neural nets with landscape design

Sat 11:45 a.m. - 1:30 p.m.
Poster Session 1 and Lunch (Poster Session)
Sumanth Dathathri, Akshay Rangamani, Prakhar Sharma, Aruni RoyChowdhury, Madhu Advani, William Guss, Chulhee Yun, Corentin Hardy, Michele Alberti, Devendra Sachan, Andreas Veit, Takashi Shinozaki, Peter Chin
Sat 1:30 p.m. - 2:00 p.m.

Fighting Black Boxes, Adversaries, and Bugs in Deep Learning

Sat 2:00 p.m. - 3:00 p.m.

1) Don't Decay the Learning Rate, Increase the Batch Size
2) Meta-Learning and Universality: Deep Representations and Gradient Descent Can Approximate Any Learning Algorithm
3) Hyperparameter Optimization: A Spectral Approach
4) Learning Implicit Generative Models with Method of Learned Moments

Sat 3:00 p.m. - 4:00 p.m.
Poster Session 2 (Poster Session)
Sat 4:00 p.m. - 4:30 p.m.

Towards Bridging Theory and Practice in DeepRL

Sat 4:30 p.m. - 5:30 p.m.
Panel (Discussion Panel)

Author Information

Sanjeev Arora (Princeton University)
Maithra Raghu (Cornell University and Google Brain)
Russ Salakhutdinov (Carnegie Mellon University)
Ludwig Schmidt (MIT)
Oriol Vinyals (Google DeepMind)

Oriol Vinyals is a Research Scientist at Google. He works in deep learning with the Google Brain team. Oriol holds a Ph.D. in EECS from the University of California, Berkeley, and a Master's degree from the University of California, San Diego. He is a recipient of the 2011 Microsoft Research PhD Fellowship. He was an early adopter of the new deep learning wave at Berkeley, and in his thesis he focused on non-convex optimization and recurrent neural networks. At Google Brain he continues working on his areas of interest, which include artificial intelligence, with particular emphasis on machine learning, language, and vision.
