Linear dimensionality reduction methods are commonly used to extract low-dimensional structure from high-dimensional data. However, popular methods disregard temporal structure, rendering them prone to extracting noise rather than meaningful dynamics when applied to time series data. At the same time, many successful unsupervised learning methods for temporal, sequential, and spatial data extract features which are predictive of their surrounding context. Combining these approaches, we introduce Dynamical Components Analysis (DCA), a linear dimensionality reduction method that discovers a subspace of high-dimensional time series data with maximal predictive information, defined as the mutual information between the past and future. We test DCA on synthetic examples and demonstrate its superior ability to extract dynamical structure compared to commonly used linear methods. We also apply DCA to several real-world datasets, showing that the dimensions extracted by DCA are more useful than those extracted by other methods for predicting future states and decoding auxiliary variables. Overall, DCA robustly extracts dynamical structure in noisy, high-dimensional data while retaining the computational efficiency and geometric interpretability of linear dimensionality reduction methods.
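To make the objective concrete: under a Gaussian stationarity assumption, the mutual information between a length-`window` past and future of a projected time series has a closed form in terms of log-determinants of windowed covariances. The sketch below estimates this quantity for a given projection; it is an illustrative approximation written for this summary, not the authors' reference implementation, and the function and parameter names are hypothetical.

```python
import numpy as np

def predictive_information(X, V, window=3):
    """Estimate the Gaussian predictive information of the time series X
    (shape T x n) projected onto the columns of V (shape n x d).

    Under a Gaussian stationarity assumption,
        PI = 0.5 * (logdet(C_past) + logdet(C_future) - logdet(C_joint)),
    where C_past and C_future are covariances of length-`window` blocks
    and C_joint is the covariance of the concatenated [past; future] block.
    Illustrative sketch only, not the authors' implementation.
    """
    Y = X @ V                     # project to the low-dimensional subspace
    T, d = Y.shape
    n_samples = T - 2 * window + 1
    # Stack consecutive windows of length 2*window: [past; future]
    Z = np.stack([Y[t:t + 2 * window].ravel() for t in range(n_samples)])
    C_joint = np.cov(Z, rowvar=False)
    k = window * d                # size of the past (and future) block
    C_past = C_joint[:k, :k]
    C_future = C_joint[k:, k:]
    _, ld_past = np.linalg.slogdet(C_past)
    _, ld_future = np.linalg.slogdet(C_future)
    _, ld_joint = np.linalg.slogdet(C_joint)
    return 0.5 * (ld_past + ld_future - ld_joint)
```

A projection onto a dimension carrying slow dynamics (e.g. an AR(1) process) yields substantially higher predictive information than a projection onto a white-noise dimension, which is the signal DCA optimizes for when choosing its subspace.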
David Clark (Columbia University)
Jesse Livezey (Lawrence Berkeley National Laboratory)
Kristofer Bouchard (Lawrence Berkeley National Laboratory)