Neural Networks for Efficient Bayesian Decoding of Natural Images from Retinal Neurons
Nikhil Parthasarathy · Eleanor Batty · William Falcon · Thomas Rutten · Mohit Rajpal · E.J. Chichilnisky · Liam Paninski

Wed Dec 06 06:30 PM -- 10:30 PM (PST) @ Pacific Ballroom #147

Decoding sensory stimuli from neural signals can be used to reveal how we sense our physical environment, and is valuable for the design of brain-machine interfaces. However, existing linear techniques for neural decoding may not fully reveal or exploit the fidelity of the neural signal. Here we develop a new approximate Bayesian method for decoding natural images from the spiking activity of populations of retinal ganglion cells (RGCs). We sidestep known computational challenges with Bayesian inference by exploiting artificial neural networks developed for computer vision, enabling fast nonlinear decoding that incorporates natural scene statistics implicitly. We use a decoder architecture that first linearly reconstructs an image from RGC spikes, then applies a convolutional autoencoder to enhance the image. The resulting decoder, trained on natural images and simulated neural responses, significantly outperforms linear decoding, as well as simple point-wise nonlinear decoding. These results provide a tool for the assessment and optimization of retinal prosthesis technologies, and reveal that the retina may provide a more accurate representation of the visual scene than previously appreciated.
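The first stage of the decoder described above, linear reconstruction of the image from RGC spike counts, can be sketched as a ridge regression fit on simulated data. Everything below (population size, image size, the random encoding model, the regularization weight `lam`) is an illustrative assumption, not the paper's actual setup; the second stage, a convolutional autoencoder that enhances the linear reconstruction, is only indicated in comments.

```python
import numpy as np

# Minimal sketch of stage 1 (linear decoding) on simulated data.
# Sizes, the encoding model, and `lam` are hypothetical choices.
rng = np.random.default_rng(0)
n_images, n_pixels, n_cells = 200, 64, 30

# Simulated flattened "natural images" and a random linear-nonlinear
# encoding model producing Poisson spike counts for each RGC.
images = rng.normal(size=(n_images, n_pixels))
encoding = rng.normal(size=(n_pixels, n_cells)) / np.sqrt(n_pixels)
rates = np.exp(images @ encoding)          # positive firing rates
spikes = rng.poisson(rates).astype(float)  # spike counts per cell

# Fit decoder W minimizing ||images - spikes @ W||^2 + lam * ||W||^2.
lam = 1.0
W = np.linalg.solve(
    spikes.T @ spikes + lam * np.eye(n_cells),
    spikes.T @ images,
)

# Stage-1 output. In the full decoder, a convolutional autoencoder would
# take these linear reconstructions as input and enhance them using
# natural scene statistics learned during training.
linear_recon = spikes @ W
mse = np.mean((linear_recon - images) ** 2)
```

By construction, the ridge objective at the fitted `W` is no worse than at `W = 0`, so the training reconstruction error falls below the raw image variance; the nonlinear second stage is what the paper credits with the further gains over this linear baseline.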

Author Information

Nikhil Parthasarathy (New York University)
Eleanor Batty (Columbia University)
William Falcon (NYU CILVR)
Thomas Rutten (Columbia University)
Mohit Rajpal (Columbia University)
E.J. Chichilnisky (Stanford University)
Liam Paninski (Columbia University)
