

Poster in Affinity Workshop: Women in Machine Learning

Decomposed Linear Dynamical Systems (dLDS) for learning the latent components of neural dynamics

Noga Mudrik · Yenho Chen · Eva Yezerets · Christopher Rozell · Adam Charles


Abstract:

Learning interpretable representations of neural dynamics at a population level is a crucial step toward understanding how neural activity relates to perception and behavior. Models of neural dynamics often focus either on low-dimensional projections of neural activity or on dynamical systems models. While both approaches seek to represent low-dimensional geometric structures, we currently lack methods that integrate the manifold hypothesis directly into a dynamical systems model, thus maintaining both model capacity and interpretability. Here, we discuss how these two approaches are interrelated by considering dynamical systems as representative of flows on a low-dimensional manifold. We propose a new decomposed dynamical system model (dLDS) that can describe complex non-stationary and nonlinear dynamics of time-series data as a sparse combination of simpler, more interpretable components, chosen from a dictionary of linear dynamical systems (LDSs). The decomposed nature of the dynamics in our model generalizes previous approaches and enables modeling of overlapping and non-stationary drifts in the dynamics, as well as dynamics with different speeds or orientations. Our learning procedure provides an avenue to estimate dynamical systems that are locally linear at each point but whose parameters change over time, and that can thus approximate nonlinear dynamics by treating the nonlinearity as a temporal non-stationarity. First, we demonstrate our model in a synthetic experiment where we recover efficient representations of an LDS with time-varying speeds and rotations, and contrast our results with existing similar models. Next, we apply our model to the FitzHugh-Nagumo and Lorenz attractors, showing that it identifies meaningful dynamical components corresponding to different sides of the Lorenz spirals. When applied to C. elegans neural recordings, our model reveals a diversity of dynamics that was obscured in previous similar models.
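To make the decomposition concrete, a rough reading of the abstract is that at each time step the latent state x_t evolves under an operator formed as a sparse combination of dictionary LDS operators, roughly x_{t+1} ≈ (Σ_j c_{t,j} A_j) x_t with sparse, time-varying coefficients c_t. The NumPy sketch below is our own illustration under that assumption (names such as A_dict, rotation, and the coefficient scheme are hypothetical, not the authors' code); it simulates such a decomposed system using rotation operators with different speeds as the dictionary.

```python
# Illustrative sketch (not the authors' implementation): simulate a latent
# trajectory whose update at each step is a sparse, time-varying combination
# of dictionary linear dynamical systems, as described in the abstract.
import numpy as np

def rotation(theta):
    """2-D rotation operator; stands in for one dictionary LDS."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

rng = np.random.default_rng(0)
A_dict = [rotation(th) for th in (0.02, 0.05, 0.10, 0.20)]  # different speeds
n_ops, T = len(A_dict), 500

x = np.array([1.0, 0.0])             # initial latent state
trajectory = [x]
for t in range(T):
    # Sparse mixing coefficients (a hypothetical choice for illustration):
    # only two dictionary elements are active, and their weights sum to one.
    c = np.zeros(n_ops)
    active = rng.choice(n_ops, size=2, replace=False)
    c[active] = rng.dirichlet(np.ones(2))
    A_t = sum(w * A for w, A in zip(c, A_dict))  # locally linear operator
    x = A_t @ x                       # time-varying, locally linear update
    trajectory.append(x)

trajectory = np.stack(trajectory)     # (T + 1, 2) latent time series
```

In an actual dLDS fit, the coefficients would be inferred from data with a sparsity penalty rather than sampled as above; the sketch is only meant to show how non-stationary, effectively nonlinear trajectories can arise from switching sparse combinations of simple linear components.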
