Compression supports low-dimensional representations of behavior across neural circuits
Dale Zhou · Jason Kim · Adam Pines · Valerie Sydnor · David Roalf · John Detre · Ruben Gur · Raquel Gur · Theodore Satterthwaite · Danielle S Bassett

Dimensionality reduction, a form of compression, can simplify representations of information to increase efficiency and reveal general patterns. Yet, this simplification also forfeits information, thereby reducing representational capacity. Hence, the brain may benefit from generating both compressed and uncompressed activity, and may do so in a heterogeneous manner across diverse neural circuits that represent low-level (sensory) or high-level (cognitive) stimuli. However, precisely how compression and representational capacity differ across the cortex remains unknown. Here we predict different levels of compression across regional circuits by using random walks on networks to model activity flow and to formulate rate-distortion functions, which are the basis of lossy compression. Using a large sample of youth ($n=1,040$), we test predictions in two ways: by measuring the dimensionality of spontaneous activity from sensorimotor to association cortex, and by assessing the representational capacity for 24 behaviors in neural circuits and 20 cognitive variables in recurrent neural networks. Our network theory of compression predicts the dimensionality of activity ($t=12.13, p<0.001$) and the representational capacity of biological ($r=0.53, p=0.016$) and artificial ($r=0.61, p<0.001$) networks. The model suggests how a basic form of compression is an emergent property of activity flow between distributed circuits that communicate with the rest of the network.
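To make the two ingredients named above concrete, the sketch below illustrates (i) a random walk on a network as a simple model of activity flow and (ii) the classic Gaussian rate-distortion function as an example of the trade-off underlying lossy compression. This is a minimal illustration only, not the authors' model: the toy adjacency matrix, walk length, and Gaussian source are hypothetical choices made for demonstration.

```python
# Minimal sketch (hypothetical example, not the paper's method):
# a random walk on a toy network plus a Gaussian rate-distortion curve.
import numpy as np

# Toy symmetric adjacency matrix for a 5-node circuit (hypothetical).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)

# Row-normalize to obtain random-walk transition probabilities.
P = A / A.sum(axis=1, keepdims=True)

# Model "activity flow" as the distribution of a walker over time.
state = np.zeros(5)
state[0] = 1.0                 # walker starts at node 0
for _ in range(10):            # 10 hypothetical walk steps
    state = state @ P          # propagate the walker one step

# Rate-distortion function of a Gaussian source with variance sigma^2:
# R(D) = 0.5 * log2(sigma^2 / D) for 0 < D <= sigma^2, and 0 otherwise.
def gaussian_rate_distortion(distortion, variance=1.0):
    return np.maximum(0.5 * np.log2(variance / distortion), 0.0)

distortions = np.linspace(0.05, 1.0, 20)
rates = gaussian_rate_distortion(distortions)

print("Walker distribution after 10 steps:", np.round(state, 3))
print("Rate (bits) at distortion 0.25:", gaussian_rate_distortion(0.25))
```

The point of the pairing is only that a row-normalized adjacency matrix propagates activity in the same way it propagates a random walker, while the rate-distortion curve quantifies the minimum rate needed to represent a source at a given distortion; how these are combined into regional predictions is detailed in the paper itself.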

#### Author Information

##### Jason Kim (Cornell University)

I am generally interested in designing and programming functions in distributed systems. More specifically, I am interested in how recurrent neural networks represent information, and how to program their interactions to run algorithms. I am currently working on how to decompile the internal models and algorithms learned by RNNs from their dynamics and recurrent weights. My long-term goal is to develop a low-level understanding of distributed neural computation that allows us to engineer neural networks with the same degree of precision and algorithmic complexity as silicon computers. My undergraduate degree was in Biomedical Engineering at Duke University, where I worked with Dr. Marc Sommer on characterizing the response of biological neurons to transcranial magnetic stimulation in macaques. I did my PhD in Bioengineering at the University of Pennsylvania with Dr. Dani Bassett on designing advanced functions in complex neural and mechanical systems. I am currently a postdoctoral fellow in Physics at Cornell University, working with Dr. Itai Cohen and Dr. James Sethna to interface distributed neural computing with microrobotics.