Invited Talk
Dropout: A simple and effective way to improve neural networks
Geoffrey E Hinton · George Dahl

Thu Dec 06 10:50 AM -- 11:40 AM (PST) @ Harveys Convention Center Floor, CC

In a large feedforward neural network, overfitting can be greatly reduced by randomly omitting half of the hidden units on each training case. This prevents complex co-adaptations in which a feature detector is only helpful in the context of several other specific feature detectors. Instead, each neuron learns to detect a feature that is generally helpful for producing the correct answer given the combinatorially large variety of internal contexts in which it must operate. Random “dropout” gives big improvements on many benchmark tasks and sets new records for object recognition and molecular activity prediction.
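The procedure described above can be summarized in a few lines of code. The following is a minimal NumPy sketch, not the authors' implementation: the function name `dropout_forward` and the parameter `p_drop` are illustrative, and the test-time scaling by the keep probability follows the "mean network" approximation used in the original dropout work.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(h, p_drop=0.5, train=True):
    """Apply dropout to a matrix of hidden activations h.

    During training, each hidden unit is zeroed independently with
    probability p_drop (0.5 in the talk). At test time all units are
    kept and their activations are scaled by (1 - p_drop) so the
    expected input to the next layer matches what it saw in training.
    """
    if train:
        mask = rng.random(h.shape) >= p_drop  # keep a unit with prob 1 - p_drop
        return h * mask
    return h * (1.0 - p_drop)

# Example: a batch of 4 training cases, each with 8 hidden-unit activations.
h = rng.standard_normal((4, 8))
h_train = dropout_forward(h, train=True)   # roughly half the units are zeroed
h_test = dropout_forward(h, train=False)   # all units kept, scaled by 0.5
print(h_train)
print(h_test)
```

Because a fresh mask is drawn for every training case, no hidden unit can rely on any particular other unit being present, which is what discourages the complex co-adaptations mentioned above.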

The Merck Molecular Activity Challenge was a contest hosted by Kaggle and sponsored by the pharmaceutical company Merck. The goal of the contest was to predict how active molecules were against a given biological target. The competition data included a large number of numerical descriptors generated from the chemical structures of the input molecules, along with activity data for fifteen different biologically relevant targets. An accurate model has numerous applications in the drug discovery process. George will discuss his team's first-place solution, based on neural networks trained with dropout.

Author Information

Geoffrey E Hinton (Google & University of Toronto)

Geoffrey Hinton received his PhD in Artificial Intelligence from Edinburgh in 1978 and spent five years as a faculty member at Carnegie-Mellon where he pioneered back-propagation, Boltzmann machines and distributed representations of words. In 1987 he became a fellow of the Canadian Institute for Advanced Research and moved to the University of Toronto. In 1998 he founded the Gatsby Computational Neuroscience Unit at University College London, returning to the University of Toronto in 2001. His group at the University of Toronto then used deep learning to change the way speech recognition and object recognition are done. He currently splits his time between the University of Toronto and Google. In 2010 he received the NSERC Herzberg Gold Medal, Canada's top award in Science and Engineering.

George Dahl (Google Brain)

George Dahl is a research scientist on the Brain team at Google working on deep learning.
