

Poster

Controlled maximal variability along with reliable performance in recurrent neural networks

Chiara Mastrogiuseppe · Ruben Moreno Bote

Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Natural behaviors, even stereotyped ones, exhibit variability. Despite its role in exploration and learning, the neural basis of this variability is still not well understood. Given the coupling between neural activity and behavior, we ask what type of neural variability does not compromise behavioral performance. While previous studies typically curtail variability to enhance task performance in neural networks, our approach takes the reverse perspective: we investigate the possibility of generating maximal neural variability while preserving high network functionality. To do so, we extend to neural activity the maximum occupancy principle (MOP) developed for behavior. We assume that the goal of the neural network is not driven by extrinsic task-related rewards but is to maximize future action-state entropy, which entails creating all possible activity patterns while avoiding terminal or dangerous states. We show that this goal can be achieved through a neural network controller that learns to inject currents (actions) into a recurrent neural network of fixed random weights while maximizing the future cumulative action-state entropy. We demonstrate that large variability can be induced in the network while adhering to a maximum energy constraint or while avoiding terminal states defined on specific neurons' activities. Further, the network solves a context-dependent drawing task by flexibly switching between stochastic and deterministic modes as needed and projecting noise onto a null space. Based on future entropy production, these results contribute to a novel theory of neural variability that reconciles stochastic and deterministic behaviors within a single framework.
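To make the setup concrete, the following is a minimal NumPy sketch of the architecture described above: a stochastic controller injects currents (actions) into a recurrent network with fixed random weights, and the rollout accumulates the policy's action entropy. All sizes, weight scales, the Euler discretization, and the Gaussian policy are illustrative assumptions, not the authors' implementation; the full MOP objective would also include state entropy, discounting, and learning of the controller weights.

```python
import numpy as np

rng = np.random.default_rng(0)
N, A = 50, 5  # hypothetical sizes: recurrent units, injected-current dims

# Recurrent network with fixed random weights (never trained)
J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
B = rng.normal(0.0, 0.1, size=(N, A))  # maps injected currents to units

def step(x, a, dt=0.1):
    """One Euler step of a rate network driven by injected current a."""
    return x + dt * (-x + np.tanh(J @ x + B @ a))

# Toy stochastic controller: Gaussian actions with state-dependent mean.
W = rng.normal(0.0, 0.1, size=(A, N))  # controller weights (to be learned)
sigma = 0.5  # fixed action noise for this sketch

def act(x):
    """Sample an action and return its differential entropy."""
    a = W @ x + sigma * rng.normal(size=A)
    entropy = 0.5 * A * np.log(2 * np.pi * np.e * sigma**2)
    return a, entropy

# Roll out the controlled network, accumulating future action entropy --
# the quantity a MOP-style controller would be trained to maximize.
x = rng.normal(size=N)
total_entropy = 0.0
for _ in range(100):
    a, h = act(x)
    x = step(x, a)
    total_entropy += h
```

In this sketch only `W` would be optimized; `J` and `B` stay fixed, mirroring the paper's separation between a learned controller and a random recurrent substrate.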
