The seminal work by Krizhevsky, Sutskever, and Hinton at NIPS two years ago kicked off a craze of training convnets on ImageNet. These networks have since been trained, improved, tweaked, and adapted to new tasks by those in the field dozens of times over. However, although a stupendous amount of recent progress has been made, the elegant inner workings of these networks are generally hidden behind scalar error metrics.
This is a shame.
In this demo, we attempt to expose some of these inner workings by allowing people to interact with a video camera connected to a trained convnet. Audience members can see, in real time, the activations they cause throughout the network and the classification decisions the network makes, an experience we have found surprisingly fun and informative.
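The abstract does not specify the implementation, so the following is only a minimal sketch of the general idea: capture webcam frames, run them through a pretrained convnet, and display one layer's activations alongside the top-1 prediction. It assumes a PyTorch/torchvision AlexNet, OpenCV for capture and display, and camera device 0; it is not the authors' demo code.

```python
# Hypothetical sketch (not the authors' implementation): live webcam frames
# through a pretrained convnet, showing activations and the top-1 class.
import cv2
import torch
import torchvision.models as models
import torchvision.transforms as T

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
labels = models.AlexNet_Weights.DEFAULT.meta["categories"]

# Capture activations from an intermediate conv layer with a forward hook.
activations = {}
def save_activation(module, inputs, output):
    activations["conv"] = output.detach()

model.features[8].register_forward_hook(save_activation)  # a late conv layer

preprocess = T.Compose([
    T.ToTensor(),
    T.Resize((224, 224), antialias=True),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

cap = cv2.VideoCapture(0)  # assumed camera index
with torch.no_grad():
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        logits = model(preprocess(rgb).unsqueeze(0))
        top1 = labels[logits.argmax(1).item()]

        # Tile 16 channels of the hooked layer into a 4x4 grid for display.
        act = activations["conv"][0, :16]
        act = (act - act.min()) / (act.max() - act.min() + 1e-6)
        grid = torch.cat([torch.cat(list(act[i * 4:(i + 1) * 4]), dim=1)
                          for i in range(4)], dim=0).numpy()
        grid = cv2.resize(grid, (416, 416), interpolation=cv2.INTER_NEAREST)

        cv2.putText(frame, top1, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                    1.0, (0, 255, 0), 2)
        cv2.imshow("camera", frame)
        cv2.imshow("activations", grid)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

cap.release()
cv2.destroyAllWindows()
```

The forward hook is one simple way to expose a layer's activations without modifying the model; any intermediate layer could be hooked in the same way to browse different stages of the network.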
Author Information
Jason Yosinski (Windscape AI / ML Collective)
Hod Lipson (Cornell University)
More from the Same Authors
- 2014 Poster: How transferable are features in deep neural networks?
  Jason Yosinski · Jeff Clune · Yoshua Bengio · Hod Lipson
- 2014 Oral: How transferable are features in deep neural networks?
  Jason Yosinski · Jeff Clune · Yoshua Bengio · Hod Lipson