
Poster

Provable Benefits of Complex Parameterizations for Structured State Space Models

Yuval Ran-Milo · Eden Lumbroso · Edo Cohen-Karlik · Raja Giryes · Amir Globerson · Nadav Cohen

Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Structured state space models (SSMs), the core engine behind prominent neural networks such as S4 and Mamba, are linear dynamical systems adhering to a specified structure, most notably diagonal. In contrast to typical neural network modules, whose parameterizations are real, SSMs often use complex parameterizations. Theoretically explaining the benefits of complex parameterizations for SSMs is an open problem. The current paper takes a step towards its resolution by establishing formal gaps between real and complex diagonal SSMs. First, we prove that while a moderate dimension suffices for a complex SSM to express all mappings of a real SSM, a much higher dimension is needed for a real SSM to express (or even approximate) mappings of a complex SSM. Second, we prove that even if the dimension of a real SSM is high enough to express a given mapping, doing so (or even approximating the given mapping) typically requires the parameters of the real SSM to take on exponentially large values, which cannot be learned in practice. In contrast, a complex SSM can express any given mapping with moderate parameter values. Experiments corroborate our theory, and point to potential extensions towards fully explaining the benefits of complex parameterizations for SSMs.
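The mappings discussed in the abstract can be made concrete with a minimal numpy sketch of a diagonal SSM's impulse response. This is an illustrative toy (function and variable names are not from the paper): a diagonal SSM x_{t+1} = diag(a) x_t + b u_t, y_t = c^T x_t has impulse response k(t) = sum_j c_j a_j^t b_j, and a single conjugate pair of complex modes already yields a damped oscillation, which real modes (monotone a_j^t terms) can only mimic with many modes or extreme coefficients.

```python
import numpy as np

def impulse_response(a, b, c, length):
    """Impulse response of a diagonal SSM x_{t+1} = diag(a) x_t + b u_t, y_t = c^T x_t.

    Entry t of the returned kernel is sum_j c_j * a_j**t * b_j.
    """
    t = np.arange(length)
    # Broadcast each mode's contribution c_j * b_j * a_j^t, then sum over modes.
    modes = (c * b)[:, None] * a[:, None] ** t[None, :]
    return modes.sum(axis=0)

# Real diagonal SSM: each mode a_j^t is monotone for 0 < a_j < 1,
# so the kernel is a sum of decaying exponentials.
real_kernel = impulse_response(np.array([0.9, 0.5]),
                               np.array([1.0, 1.0]),
                               np.array([1.0, -1.0]),
                               length=10)

# Complex diagonal SSM: one conjugate pair a = r * exp(±i*theta)
# with c = 0.5 on both modes gives the oscillatory kernel r^t * cos(theta * t).
r, theta = 0.95, 0.3
a = np.array([r * np.exp(1j * theta), r * np.exp(-1j * theta)])
b = np.array([1.0 + 0j, 1.0 + 0j])
c = np.array([0.5 + 0j, 0.5 + 0j])
complex_kernel = impulse_response(a, b, c, length=10).real  # conjugate pair -> real output
```

Kernels like `complex_kernel` above (damped cosines) are exactly the kind of mapping that, per the paper's results, a real diagonal SSM can only express or approximate with much higher dimension or exponentially large parameter values.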