

Oral Poster

NeuroClips: Towards High-fidelity and Smooth fMRI-to-Video Reconstruction

Zixuan Gong · Yu Zhang · Guangyin Bao · Zhongwei Wan · Shoujin Wang · Lei Zhu · Changwei Wang · Rongtao Xu · Liang Hu · Ke Liu · Qi Zhang

East Exhibit Hall A-C #3905
[ Project Page ]
Wed 11 Dec 11 a.m. PST — 2 p.m. PST
 
Oral presentation: Oral Session 1A: Neuroscience and Interpretability
Wed 11 Dec 10 a.m. PST — 11 a.m. PST

Abstract:

Reconstruction of static visual stimuli from non-invasive brain activity (fMRI) has achieved great success, owing to advanced deep learning models such as CLIP and Stable Diffusion. However, research on fMRI-to-video reconstruction remains limited, since decoding the spatiotemporal perception of continuous visual experiences is formidably challenging. We contend that the key to addressing these challenges lies in accurately decoding both the high-level semantics and the low-level perception flows that the brain perceives in response to video stimuli. To this end, we propose NeuroClips, an innovative framework for decoding high-fidelity and smooth video from fMRI. NeuroClips utilizes a semantics reconstructor to reconstruct video keyframes, guiding semantic accuracy and consistency, and employs a perception reconstructor to capture low-level perceptual details, ensuring video smoothness. During inference, it adopts a pre-trained T2V diffusion model injected with both keyframes and low-level perception flows for video reconstruction. Evaluated on a publicly available fMRI-video dataset, NeuroClips achieves smooth, high-fidelity video reconstruction of up to 6 s at 8 FPS, with significant improvements over state-of-the-art models on various metrics, e.g., a 128% improvement in SSIM and an 81% improvement in spatiotemporal metrics. Our project is available at https://anonymous.4open.science/r/NeuroClips-72DC/
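To make the two-branch design in the abstract concrete, the following is a minimal, hypothetical Python/PyTorch sketch of the described inference flow: a semantics reconstructor produces a keyframe embedding, a perception reconstructor produces a low-level per-frame flow, and both are injected into a pre-trained T2V diffusion model. All module names, dimensions, and the t2v_diffusion interface are illustrative assumptions, not the authors' implementation; the paper and project page hold the real details.

    # Hypothetical sketch of the NeuroClips inference pipeline as described in
    # the abstract. Module names, shapes, and the t2v_diffusion interface are
    # assumptions for illustration, NOT the authors' actual implementation.
    import torch
    import torch.nn as nn

    class SemanticsReconstructor(nn.Module):
        """Maps an fMRI signal to a keyframe embedding (high-level semantics)."""
        def __init__(self, n_voxels: int, embed_dim: int = 768):
            super().__init__()
            self.proj = nn.Sequential(
                nn.Linear(n_voxels, 2048), nn.GELU(), nn.Linear(2048, embed_dim)
            )

        def forward(self, fmri: torch.Tensor) -> torch.Tensor:
            return self.proj(fmri)  # (batch, embed_dim) keyframe embedding

    class PerceptionReconstructor(nn.Module):
        """Maps fMRI to a coarse per-frame perception flow (low-level details)."""
        def __init__(self, n_voxels: int, n_frames: int = 48, latent_dim: int = 64):
            super().__init__()
            self.n_frames, self.latent_dim = n_frames, latent_dim
            self.proj = nn.Linear(n_voxels, n_frames * latent_dim)

        def forward(self, fmri: torch.Tensor) -> torch.Tensor:
            out = self.proj(fmri)
            return out.view(-1, self.n_frames, self.latent_dim)  # (batch, T, D)

    @torch.no_grad()
    def reconstruct_video(fmri, semantics, perception, t2v_diffusion):
        """Inference: inject keyframe semantics (fidelity) and the perception
        flow (smoothness) into a pre-trained T2V diffusion model (stubbed)."""
        keyframe_emb = semantics(fmri)
        flow = perception(fmri)
        return t2v_diffusion(keyframe_emb, flow)

    if __name__ == "__main__":
        n_voxels = 4500                    # placeholder voxel count
        fmri = torch.randn(1, n_voxels)
        sem = SemanticsReconstructor(n_voxels)
        per = PerceptionReconstructor(n_voxels)
        # Stand-in for the pre-trained T2V diffusion model; 48 frames matches
        # the abstract's 6 s clip at 8 FPS.
        t2v = lambda emb, flow: torch.zeros(1, flow.shape[1], 3, 256, 256)
        video = reconstruct_video(fmri, sem, per, t2v)
        print(video.shape)                 # torch.Size([1, 48, 3, 256, 256])

In this sketch the diffusion model is a stub; the key point it illustrates is that both conditioning signals are computed from the same fMRI input and supplied jointly at inference time.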
