Symposium
Thu Dec 08 05:00 AM -- 12:30 PM (PST) @ Area 3
Recurrent Neural Networks and Other Machines that Learn Algorithms
Jürgen Schmidhuber · Sepp Hochreiter · Alex Graves · Rupesh K Srivastava

Soon after the birth of modern computer science in the 1930s, two fundamental questions arose: 1. How can computers learn useful programs from experience, as opposed to being programmed by human programmers? 2. How can parallel multiprocessor machines be programmed, as opposed to traditional serial architectures? Both questions found natural answers in the field of Recurrent Neural Networks (RNNs), which are brain-inspired general-purpose computers that can learn parallel-sequential programs or algorithms encoded as weight matrices.
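
To make the "programs encoded as weight matrices" idea concrete, here is a minimal sketch (our own illustration with placeholder sizes and names, not code from the symposium) of a vanilla RNN step in Python/NumPy: the only learned objects are the weight matrices, and the sequential computation they induce is the learned program.

import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size, output_size = 4, 8, 3

# Learned parameters (random placeholders here); these weights encode the "program".
W_xh = 0.1 * rng.standard_normal((hidden_size, input_size))
W_hh = 0.1 * rng.standard_normal((hidden_size, hidden_size))
W_hy = 0.1 * rng.standard_normal((output_size, hidden_size))

def rnn_run(inputs):
    # Process a sequence step by step, carrying state in the hidden vector h.
    h = np.zeros(hidden_size)
    outputs = []
    for x in inputs:
        h = np.tanh(W_xh @ x + W_hh @ h)  # recurrent (sequential) update
        outputs.append(W_hy @ h)          # readout at every step
    return outputs

sequence = [rng.standard_normal(input_size) for _ in range(5)]
print(rnn_run(sequence)[-1])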

Our first RNNaissance NIPS workshop dates back to 2003: http://people.idsia.ch/~juergen/rnnaissance.html. Since then, a lot has happened. Some of the most successful applications in machine learning (including deep learning) are now driven by RNNs such as Long Short-Term Memory (LSTM): speech recognition, video recognition, natural language processing, image captioning, time series prediction, and more. Through the world's most valuable public companies, billions of people now have access to this technology through their smartphones and other devices, e.g., in the form of Google Voice or on Apple's iOS. Reinforcement-learning and evolutionary RNNs are solving complex control tasks from raw video input. Many RNN-based methods learn sequential attention strategies.

Here we will review the latest developments in all of these fields, and focus not only on RNNs but also on learning machines in which RNNs interact with external memory, such as neural Turing machines and memory networks, and on related memory architectures such as fast weight networks and neural stack machines. In this context we will also discuss asymptotically optimal program search methods and their practical relevance.
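
As a deliberately simplified illustration of how an RNN controller can interact with external memory, the sketch below shows content-based addressing in the style of neural Turing machines: the controller emits a key, memory slots are scored by cosine similarity, and the read vector is a softmax-weighted sum of the slots. The function name, the toy memory, and the sharpness parameter beta are our own illustrative choices, not any specific paper's implementation.

import numpy as np

def content_read(memory, key, beta=5.0):
    # memory: (N, M) matrix of N slots; key: (M,) query; beta: key sharpness.
    # Cosine similarity between the key and each memory slot.
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    weights = np.exp(beta * sims)
    weights /= weights.sum()    # soft attention over memory slots
    return weights @ memory     # differentiable read vector

memory = np.eye(4, 6)           # toy memory with 4 slots of width 6
key = np.array([0.9, 0.1, 0.0, 0.0, 0.0, 0.0])
print(content_read(memory, key))

Because the read is a soft, differentiable mixture over slots, gradients flow through the memory access, which is what lets such architectures be trained end to end.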

Our target audience has heard a bit about recurrent neural networks but will be happy to hear a summary of the basics again, and then delve into the latest advances, to see and understand what has recently become possible. We are hoping for thousands of attendees.

All talks (mostly by famous experts in the field who have already agreed to speak) will be followed by open discussions. We will also issue a call for posters; selected posters will be displayed around the lecture hall. We will close with a panel discussion on the bright future of RNNs, and on their pros and cons.