

Poster in Workshop: AI for Science: Mind the Gaps

Linear Transformations in Autoencoder Latent Space Predict Time Translations in Active Matter System

Enrique Amaya · Shahriar Shadkhoo · Dominik Schildknecht · Matt Thomson


Abstract:

Machine Learning (ML) approaches are promising for deriving dynamical predictions of physical systems from data. They are particularly relevant in active matter, a field spanning many scales that studies the dynamics of far-from-equilibrium systems, where predicting macroscopic behavior from the microscopic interactions of active particles remains a significant challenge. A major obstacle in applying ML to active systems is encoding a continuous representation of time within a neural network. In this work, we develop a framework for predicting the dynamics of active networks of protein filaments and motors by combining a low-dimensional latent representation, inferred through an autoencoder, with a linear shift neural network that encodes time translation as a linear transformation within the latent space. Our method predicts the contraction and boundary deformations of active networks with various geometries. Although trained to predict 20 time steps into the future, it generalizes to periods of 60 time steps and recapitulates the past 30 frames from a single given observation with less than 10% error. Finally, we derive an approximate analytic expression for the linear transformation in the latent space that captures the dynamics. Broadly, our study shows that neural networks can forecast the behavior of active matter systems in the complete absence of knowledge of the microscopic dynamics.
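The core idea of the abstract, time translation acting as a linear map on an autoencoder's latent space, can be sketched in a toy form. The snippet below is our own illustration, not the authors' code: it assumes frames have already been encoded into latent vectors z_t, simulates a linear latent dynamic, fits the shift matrix by least squares, and forecasts by repeated application of the fitted map (names like `A_fit` and `forecast` are hypothetical).

```python
import numpy as np

# Sketch of the "linear shift in latent space" idea (illustrative names, not the
# authors'). Assume an autoencoder has mapped frames x_t to latent vectors z_t,
# and time translation obeys z_{t+1} ≈ A z_t for a fixed matrix A.

rng = np.random.default_rng(0)
d = 4  # latent dimension (illustrative)

# A contracting linear dynamic, loosely mimicking network contraction:
# 0.95 times a random orthogonal matrix.
A_true = 0.95 * np.linalg.qr(rng.normal(size=(d, d)))[0]

# Simulate a latent trajectory z_0 ... z_T under the true dynamics.
T = 50
Z = np.empty((T + 1, d))
Z[0] = rng.normal(size=d)
for t in range(T):
    Z[t + 1] = A_true @ Z[t]

# Fit the shift matrix by least squares: solve Z[:-1] @ A^T ≈ Z[1:].
A_fit_T, *_ = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)
A_fit = A_fit_T.T

def forecast(z0, k):
    """Predict k steps ahead by repeated application of the fitted linear map."""
    return np.linalg.matrix_power(A_fit, k) @ z0

# Roll the fitted map out over the full horizon and compare to the trajectory.
err = np.linalg.norm(forecast(Z[0], T) - Z[T]) / np.linalg.norm(Z[T])
print(f"relative rollout error after {T} steps: {err:.2e}")
```

Because the map is linear, extrapolating beyond the training horizon (or backwards in time, via the inverse of `A_fit`) is just further matrix powers, which mirrors how the paper's model generalizes from 20 training steps to 60-step forecasts and 30-frame hindcasts.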