Session: Creative AI Session 3

NeuroView: Generative Visualization of the Diversity of Brain Responses to Jazz

Hall D1 (level 1) Table 8
Thu 14 Dec 8:45 a.m. PST — 12:15 p.m. PST

Abstract:

How we perceive the world around us is intrinsically linked to the environments in which we live, the people with whom we interact, and the experiences we have had. This subjective reality explains in part why we like the music we do, which films make us cry, and how certain smells can so quickly bring us back to key moments in our lives. A monumental discovery in neuroscience is that these subjective experiences we share can in part be measured through electroencephalography (EEG), a non-invasive technique that uses electrodes placed on the surface of the head to measure the electric fields produced by collections of neurons acting in concert. Because these electrodes are positioned across the entire head, they capture activity from neural structures involved in diverse functions such as auditory processing, volitional movement, and visual processing, among many others.

In this work, we present NeuroView, an AI-enabled, EEG-based brain-computer interface for visualizing the subjective experience of jazz music. Jazz represents a fusion of diverse cultures and experiences while also reflecting the general human experience. Originally formed in the African American communities of New Orleans, jazz strongly reflects the communities in which it is played, with unique forms arising in New York, Minneapolis, New Orleans, and Los Angeles. What unites all jazz is the drive of its musicians to form a collective voice through their instruments while listening to and supporting one another in improvisational jam sessions. As such, jazz takes influence from all who are open to its form and creates from it something more.

In NeuroView, EEG correlates of emotional processing drive a VQGAN generative neural network co-trained on the ImageNet dataset to produce surrealist music videos that aim to enter the uncanny valley of thought and neural processes. NeuroView thus explores how differences in brain activity, and in the perception and reception of music, can be visualized with generative VQGAN models in ways that celebrate this diversity of emotional experience as something beautiful that contributes to our collective humanity.
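The abstract does not give implementation details, so the following Python fragment is only a minimal sketch of one plausible version of such a pipeline: per-band EEG spectral power is projected into a latent perturbation that could steer a pretrained VQGAN decoder frame by frame. The sampling rate, band edges, latent shape, and the vqgan.decode call are all illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch: EEG band power -> VQGAN latent -> video frame.
    # Names and shapes here are assumptions, not the NeuroView implementation.
    import numpy as np
    from scipy.signal import welch

    FS = 256                      # assumed EEG sampling rate (Hz)
    BANDS = {"theta": (4, 8), "alpha": (8, 13),
             "beta": (13, 30), "gamma": (30, 45)}
    LATENT_SHAPE = (256, 16, 16)  # typical VQGAN latent grid; assumed here

    def band_powers(window: np.ndarray) -> np.ndarray:
        """Mean spectral power per canonical EEG band for one channel window."""
        freqs, psd = welch(window, fs=FS, nperseg=FS)
        return np.array([psd[(freqs >= lo) & (freqs < hi)].mean()
                         for lo, hi in BANDS.values()])

    def features_to_latent(feats: np.ndarray,
                           rng: np.random.Generator) -> np.ndarray:
        """Project normalized band powers into a VQGAN-sized latent shift."""
        feats = (feats - feats.mean()) / (feats.std() + 1e-8)
        # Fixed random projection: one latent-space direction per EEG band,
        # so shifts in emotion-related band power steer the imagery.
        proj = rng.standard_normal((feats.size, int(np.prod(LATENT_SHAPE))))
        return (feats @ proj).reshape(LATENT_SHAPE)

    rng = np.random.default_rng(0)
    base = rng.standard_normal(LATENT_SHAPE)      # anchor image latent
    eeg_window = rng.standard_normal(FS * 2)      # stand-in for 2 s of one channel
    z = base + 0.1 * features_to_latent(band_powers(eeg_window), rng)
    # frame = vqgan.decode(z)  # pretrained VQGAN decoder (assumed API)

Seeding the projection keeps the EEG-to-latent mapping deterministic, so in a sketch like this the same emotional state would steer the generated imagery the same way throughout a session.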
