Not all neural network architectures are created equal; some perform much better than others for certain tasks. But how important are the weight parameters of a neural network compared to its architecture? In this work, we question to what extent neural network architectures alone, without learning any weight parameters, can encode solutions for a given task. We propose a search method for neural network architectures that can already perform a task without any explicit weight training. To evaluate these networks, we populate the connections with a single shared weight parameter sampled from a uniform random distribution, and measure the expected performance. We demonstrate that our method can find minimal neural network architectures that can perform several reinforcement learning tasks without weight training. On a supervised learning domain, we find network architectures that achieve much higher than chance accuracy on MNIST using random weights.
Interactive version of this paper at https://weightagnostic.github.io/
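To make the evaluation step in the abstract concrete, the sketch below (ours, not the authors' released code) scores a candidate architecture by drawing a single shared weight from a uniform distribution, assigning that one value to every connection, and averaging task performance over several draws. The callables `make_policy` and `run_episode` are hypothetical placeholders for an architecture-to-network builder and an environment rollout.

```python
import numpy as np

def weight_agnostic_score(architecture, make_policy, run_episode,
                          n_samples=10, low=-2.0, high=2.0, seed=0):
    """Estimate an architecture's expected performance when every
    connection shares one weight value and no training is performed."""
    rng = np.random.default_rng(seed)
    returns = []
    for _ in range(n_samples):
        w = rng.uniform(low, high)             # one shared weight for the whole net
        policy = make_policy(architecture, w)  # every connection is set to w
        returns.append(run_episode(policy))    # reward obtained on the task
    return float(np.mean(returns))             # expected performance over weight draws
```

An architecture search would then rank candidate topologies by this expected score (and by network size), rather than by performance under trained weights.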
Author Information
Adam Gaier (Google / Inria / H-BRS)
David Ha (Google Brain)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Spotlight: Weight Agnostic Neural Networks » Wed. Dec 11th 01:25 -- 01:30 AM, Room West Ballroom A + B
More from the Same Authors
- 2019: Innate Bodies, Innate Brains, and Innate World Models » (David Ha)
- 2019 Poster: Learning to Predict Without Looking Ahead: World Models Without Forward Prediction » (Daniel Freeman · David Ha · Luke Metz)
- 2018: David Ha » (David Ha)
- 2018 Poster: Recurrent World Models Facilitate Policy Evolution » (David Ha · Jürgen Schmidhuber)
- 2018 Oral: Recurrent World Models Facilitate Policy Evolution » (David Ha · Jürgen Schmidhuber)
- 2017 Workshop: Machine Learning for Creativity and Design » (Douglas Eck · David Ha · S. M. Ali Eslami · Sander Dieleman · Rebecca Fiebrink · Luba Elliott)