

Spotlight
in
Workshop: Deep Generative Models for Health

Spotlight: Synthetic Sleep EEG Signal Generation using Latent Diffusion Models

Bruno Aristimunha · Raphael Yokoingawa de Camargo · Sylvain Chevallier · Oeslle Lucena · Adam Thomas · M. Jorge Cardoso · Walter Lopez Pinaya · Jessica Dafflon

Fri 15 Dec 1:40 p.m. PST — 1:55 p.m. PST

Abstract:

Electroencephalography (EEG) is a non-invasive method that records rich temporal information and is a valuable tool for diagnosing various neurological and psychiatric conditions. Two main limitations of EEG are its low signal-to-noise ratio and the scarcity of data available to train large, data-hungry neural networks. Sharing large healthcare datasets is crucial to advancing medical imaging research, but privacy concerns often impede such efforts. Deep generative models have gained attention as a way to circumvent data-sharing limitations and as a possible means of generating data to improve model performance. This work investigates latent diffusion models with a spectral loss as a deep generative modeling approach to generate 30-second windows of synthetic EEG signals of sleep stages. The spectral loss is essential to guarantee that the generated signal contains structured oscillations in the specific frequency bands typical of EEG signals. We trained our models on two large sleep datasets (Sleep EDFx and SHHS) and used the Multi-Scale Structural Similarity metric, the Fréchet inception distance, and a spectrogram analysis to evaluate the quality of the synthetic signals. We demonstrate that the latent diffusion model can generate realistic signals with the correct neural oscillations and could therefore be used to overcome the scarcity of EEG data.
