Poster
in
Workshop: Shared Visual Representations in Human and Machine Intelligence (SVRHM)

Redundancy and dependency in brain activities

Sikun Lin · Thomas Sprague · Ambuj K Singh


Abstract:

How many signals in brain activity can be erased before the encoded information is lost? Surprisingly, we found that both reconstruction and classification of voxel activities still achieve relatively good performance even after losing 80%-90% of the signals. This raises the question of how the brain encodes information so robustly. This paper investigates the redundancy and dependency of brain signals using two deep learning models with minimal inductive bias (linear layers). Furthermore, we explore the alignment between brain and semantic representations, how redundancy differs across stimuli and regions, and the dependency between brain voxels and regions.
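The signal-erasure experiment can be sketched in miniature: zero out most voxels, then decode with a single linear layer. This is a hedged illustration only; the simulated data, the 85% mask fraction, and the least-squares fit are assumptions for demonstration, not the authors' dataset or models.

```python
import numpy as np

# Hypothetical sketch of the erasure setup: class-dependent "voxel"
# activity is simulated, most voxels are zeroed out, and a single
# linear layer (minimal inductive bias) is fit to classify.
rng = np.random.default_rng(0)

n_samples, n_voxels, n_classes = 200, 500, 4
labels = rng.integers(0, n_classes, n_samples)

# Simulated voxel activity: a class prototype plus Gaussian noise.
prototypes = rng.normal(size=(n_classes, n_voxels))
X = prototypes[labels] + 0.5 * rng.normal(size=(n_samples, n_voxels))

# Erase ~85% of voxels (set to zero), mimicking signal loss.
keep = rng.random(n_voxels) > 0.85
X_masked = X * keep

# One-hot targets; fit a linear map by least squares on a train split.
Y = np.eye(n_classes)[labels]
n_train = 150
W, *_ = np.linalg.lstsq(X_masked[:n_train], Y[:n_train], rcond=None)

# Evaluate classification on held-out samples.
pred = (X_masked[n_train:] @ W).argmax(axis=1)
accuracy = (pred == labels[n_train:]).mean()
```

In this toy setting, accuracy stays well above the 25% chance level despite the mask, echoing the robustness the abstract reports; the real experiments use actual voxel recordings and learned reconstruction models.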