We consider the problem of estimating a linear time-invariant (LTI) dynamical system from a single trajectory via streaming algorithms, a problem that arises in several applications including reinforcement learning (RL) and time-series analysis. While LTI system estimation is well studied in the offline setting, the practically important streaming/online setting has received little attention. Standard streaming methods like stochastic gradient descent (SGD) are unlikely to work, since streaming points can be highly correlated. In this work, we propose a novel streaming algorithm, SGD with Reverse Experience Replay (SGD-RER), inspired by the experience replay (ER) technique popular in the RL literature. SGD-RER divides the data stream into small buffers and runs SGD backwards on the data stored in each buffer. We show that this algorithm exactly deconstructs the dependency structure and obtains information-theoretically optimal guarantees for both parameter error and prediction error. Thus, to the best of our knowledge, we provide the first optimal SGD-style algorithm for the classical problem of linear system identification with a first-order oracle. Furthermore, SGD-RER can be applied to more general settings, such as sparse LTI identification with a known sparsity pattern and non-linear dynamical systems. Our work demonstrates that knowledge of the data dependency structure can aid us in designing statistically and computationally efficient algorithms that "decorrelate" streaming samples.
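To make the buffer-and-reverse idea concrete, here is a minimal Python/NumPy sketch of running SGD in reverse time order over small buffers of a single trajectory from the LTI model x_{t+1} = A x_t + noise. The buffer size, step size, and synthetic data are illustrative assumptions, not the paper's choices, and the sketch omits refinements of the full SGD-RER algorithm described in the paper.

```python
import numpy as np

def sgd_rer_sketch(X, buffer_size=50, lr=0.01):
    """Illustrative sketch of SGD with Reverse Experience Replay (SGD-RER)
    for estimating A in the LTI system x_{t+1} = A x_t + noise.

    X: array of shape (T, d), a single observed trajectory.
    buffer_size, lr: hypothetical hyperparameters for illustration only.
    """
    T, d = X.shape
    A_hat = np.zeros((d, d))
    num_buffers = (T - 1) // buffer_size
    for b in range(num_buffers):
        start = b * buffer_size
        # Process the transitions inside this buffer in *reverse* time order,
        # which is the key idea used to break temporal correlations.
        for t in range(start + buffer_size - 1, start - 1, -1):
            x_t, x_next = X[t], X[t + 1]
            # One SGD step on the squared prediction error ||x_{t+1} - A x_t||^2.
            grad = np.outer(A_hat @ x_t - x_next, x_t)
            A_hat -= lr * grad
    return A_hat

# Usage on synthetic data from a stable system (illustrative only).
rng = np.random.default_rng(0)
d = 3
A_true = 0.9 * np.linalg.qr(rng.standard_normal((d, d)))[0]  # spectral radius 0.9
X = np.zeros((2001, d))
for t in range(2000):
    X[t + 1] = A_true @ X[t] + 0.1 * rng.standard_normal(d)
print(np.linalg.norm(sgd_rer_sketch(X) - A_true))
```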
Author Information
Suhas Kowshik (Massachusetts Institute of Technology)
Dheeraj Nagaraj (Massachusetts Institute of Technology)
Prateek Jain (Google Research)
Praneeth Netrapalli (Google)
More from the Same Authors
- 2021 Spotlight: Near-optimal Offline and Streaming Algorithms for Learning Non-Linear Dynamical Systems
  Suhas Kowshik · Dheeraj Nagaraj · Prateek Jain · Praneeth Netrapalli
- 2021 Spotlight: Differentially Private Model Personalization
  Prateek Jain · John Rush · Adam Smith · Shuang Song · Abhradeep Guha Thakurta
- 2021 Spotlight: Near-Optimal Lower Bounds For Convex Optimization For All Orders of Smoothness
  Ankit Garg · Robin Kothari · Praneeth Netrapalli · Suhail Sherif
- 2022: MET: Masked Encoding for Tabular Data
  Kushal Majmundar · Sachin Goyal · Praneeth Netrapalli · Prateek Jain
- 2022: Learning an Invertible Output Mapping Can Mitigate Simplicity Bias in Neural Networks
  Sravanti Addepalli · Anshul Nasery · Venkatesh Babu R · Praneeth Netrapalli · Prateek Jain
- 2022 Poster: DP-PCA: Statistically Optimal and Differentially Private PCA
  Xiyang Liu · Weihao Kong · Prateek Jain · Sewoong Oh
- 2022 Poster: S3GC: Scalable Self-Supervised Graph Clustering
  Fnu Devvrit · Aditya Sinha · Inderjit Dhillon · Prateek Jain
- 2022 Poster: Reproducibility in Optimization: Theoretical Framework and Limits
  Kwangjun Ahn · Prateek Jain · Ziwei Ji · Satyen Kale · Praneeth Netrapalli · Gil I Shamir
- 2022 Poster: Matryoshka Representation Learning
  Aditya Kusupati · Gantavya Bhatt · Aniket Rege · Matthew Wallingford · Aditya Sinha · Vivek Ramanujan · William Howard-Snyder · Kaifeng Chen · Sham Kakade · Prateek Jain · Ali Farhadi
- 2021 Poster: Differentially Private Model Personalization
  Prateek Jain · John Rush · Adam Smith · Shuang Song · Abhradeep Guha Thakurta
- 2021 Poster: LLC: Accurate, Multi-purpose Learnt Low-dimensional Binary Codes
  Aditya Kusupati · Matthew Wallingford · Vivek Ramanujan · Raghav Somani · Jae Sung Park · Krishna Pillutla · Prateek Jain · Sham Kakade · Ali Farhadi
- 2021 Poster: The staircase property: How hierarchical structure can guide deep learning
  Emmanuel Abbe · Enric Boix-Adsera · Matthew S Brennan · Guy Bresler · Dheeraj Nagaraj
- 2021 Poster: Do Input Gradients Highlight Discriminative Features?
  Harshay Shah · Prateek Jain · Praneeth Netrapalli
- 2021 Poster: Near-optimal Offline and Streaming Algorithms for Learning Non-Linear Dynamical Systems
  Suhas Kowshik · Dheeraj Nagaraj · Prateek Jain · Praneeth Netrapalli
- 2021 Poster: Statistically and Computationally Efficient Linear Meta-representation Learning
  Kiran Thekumparampil · Prateek Jain · Praneeth Netrapalli · Sewoong Oh
- 2021 Poster: Near-Optimal Lower Bounds For Convex Optimization For All Orders of Smoothness
  Ankit Garg · Robin Kothari · Praneeth Netrapalli · Suhail Sherif
- 2020 Poster: Projection Efficient Subgradient Method and Optimal Nonsmooth Frank-Wolfe Method
  Kiran Thekumparampil · Prateek Jain · Praneeth Netrapalli · Sewoong Oh
- 2020 Spotlight: Projection Efficient Subgradient Method and Optimal Nonsmooth Frank-Wolfe Method
  Kiran Thekumparampil · Prateek Jain · Praneeth Netrapalli · Sewoong Oh
- 2020 Poster: Sharp Representation Theorems for ReLU Networks with Precise Dependence on Depth
  Guy Bresler · Dheeraj Nagaraj
- 2020 Poster: RNNPool: Efficient Non-linear Pooling for RAM Constrained Inference
  Oindrila Saha · Aditya Kusupati · Harsha Vardhan Simhadri · Manik Varma · Prateek Jain
- 2020 Spotlight: RNNPool: Efficient Non-linear Pooling for RAM Constrained Inference
  Oindrila Saha · Aditya Kusupati · Harsha Vardhan Simhadri · Manik Varma · Prateek Jain
- 2020 Poster: The Pitfalls of Simplicity Bias in Neural Networks
  Harshay Shah · Kaustav Tamuly · Aditi Raghunathan · Prateek Jain · Praneeth Netrapalli
- 2020 Poster: Least Squares Regression with Markovian Data: Fundamental Limits and Algorithms
  Dheeraj Nagaraj · Xian Wu · Guy Bresler · Prateek Jain · Praneeth Netrapalli
- 2020 Spotlight: Least Squares Regression with Markovian Data: Fundamental Limits and Algorithms
  Dheeraj Nagaraj · Xian Wu · Guy Bresler · Prateek Jain · Praneeth Netrapalli