S4ND: Modeling Images and Videos as Multidimensional Signals with State Spaces
Eric Nguyen · Karan Goel · Albert Gu · Gordon Downs · Preey Shah · Tri Dao · Stephen Baccus · Christopher Ré

Tue Nov 29 09:00 AM -- 11:00 AM (PST) @ Hall J #918
Visual data such as images and videos are typically modeled as discretizations of inherently continuous, multidimensional signals. Existing continuous-signal models attempt to exploit this fact by modeling the underlying signals of visual (e.g., image) data directly. However, these models have not yet been able to achieve competitive performance on practical vision tasks such as large-scale image and video classification. Building on a recent line of work on deep state space models (SSMs), we propose S4ND, a new multidimensional SSM layer that extends the continuous-signal modeling ability of SSMs to multidimensional data including images and videos. We show that S4ND can model large-scale visual data in $1$D, $2$D, and $3$D as continuous multidimensional signals and demonstrates strong performance by simply swapping Conv2D and self-attention layers with S4ND layers in existing state-of-the-art models. On ImageNet-1k, S4ND exceeds the performance of a Vision Transformer baseline by $1.5\%$ when training with a $1$D sequence of patches, and matches ConvNeXt when modeling images in $2$D. For videos, S4ND improves on an inflated $3$D ConvNeXt in activity classification on HMDB-51 by $4\%$. S4ND implicitly learns global, continuous convolutional kernels that are resolution invariant by construction, providing an inductive bias that enables generalization across multiple resolutions. By developing a simple bandlimiting modification to S4 to overcome aliasing, S4ND achieves strong zero-shot (unseen at training time) resolution performance, outperforming a baseline Conv2D by $40\%$ on CIFAR-10 when trained on $8 \times 8$ and tested on $32 \times 32$ images. When trained with progressive resizing, S4ND comes within $\sim 1\%$ of a high-resolution model while training $22\%$ faster.
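The abstract describes S4ND layers as learning global, multidimensional convolutional kernels built from 1D SSM kernels. As a rough illustration only (not the authors' implementation), the sketch below forms a 2D global kernel as the outer product of two 1D kernels, standing in for the kernels an S4 layer would materialize along each axis, and applies it as a circular convolution via the FFT. The kernel shapes here are hypothetical placeholders.

```python
import numpy as np

def global_2d_conv(x, k_h, k_w):
    """Global 2D circular convolution in the S4ND spirit:
    the 2D kernel is the outer product of two 1D kernels
    (placeholders for learned S4 kernels), applied via FFT."""
    K = np.outer(k_h, k_w)            # (H, W) global kernel
    Xf = np.fft.fft2(x)
    Kf = np.fft.fft2(K, s=x.shape)    # zero-pad kernel to image size
    return np.fft.ifft2(Xf * Kf).real

# Toy usage: an 8x8 "image" with decaying kernels along each axis
# (hypothetical kernel shapes, chosen for illustration).
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
k_h = np.exp(-0.5 * np.arange(8))
k_w = np.exp(-0.3 * np.arange(8))
y = global_2d_conv(x, k_h, k_w)
print(y.shape)  # (8, 8)
```

Because the kernel is defined by continuous-time SSM parameters rather than a fixed grid of weights, it can in principle be re-sampled at a new resolution; the paper's bandlimiting modification (not shown here) suppresses kernel frequencies above the training resolution's Nyquist rate to avoid aliasing when doing so.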

Author Information

Eric Nguyen (Stanford University)

PhD student at Stanford working on computer vision, deep learning, and bioengineering.

Karan Goel (Stanford University)
Albert Gu (Stanford)
Gordon Downs (Stanford University)

I'm a Master's student in CS at Stanford University. Before that, I graduated from the University of Arizona with BS degrees in electrical and computer engineering and applied math, and I worked on NASA's Curiosity Rover mission science team.

Preey Shah (Computer Science Department, Stanford University)
Tri Dao (Stanford University)
Stephen Baccus (Stanford University)
Christopher Ré (Stanford)
