Time series analysis is a widespread task in the natural sciences, social sciences, and engineering. A fundamental problem is finding an expressive yet efficient-to-compute representation of the input time series to serve as a starting point for arbitrary downstream tasks. In this paper, we build on recent work that uses the signature of a path as a feature map, and we investigate a computationally efficient technique for approximating these features based on linear random projections. We present several theoretical results to justify our approach, and we analyze and showcase its empirical performance on the task of learning a mapping between the input controls of a Stochastic Differential Equation (SDE) and its corresponding solution. Our results show that the representational power of the proposed random features allows us to learn this mapping efficiently.
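As a rough illustration of the idea described in the abstract (not the paper's exact construction), the sketch below computes the level-1 and level-2 terms of the signature of a piecewise-linear path and then compresses them with a linear Gaussian random projection. The function names, the truncation at level 2, and the projection dimension are all illustrative assumptions.

```python
import numpy as np

def signature_level2(path):
    """Level-1 and level-2 terms of the signature of a piecewise-linear
    path, given as an (N, d) array of sample points."""
    dX = np.diff(path, axis=0)           # increments, shape (N-1, d)
    S1 = dX.sum(axis=0)                  # level 1: total displacement
    # Level 2 for a piecewise-linear path:
    #   sum_{i<j} dX_i (x) dX_j  +  0.5 * sum_i dX_i (x) dX_i
    prefix = np.cumsum(dX, axis=0) - dX  # sum of increments before step i
    S2 = (np.einsum('ti,tj->ij', prefix, dX)
          + 0.5 * np.einsum('ti,tj->ij', dX, dX))
    return np.concatenate([S1, S2.ravel()])

def random_signature_features(path, out_dim, seed=0):
    """Compress the truncated signature with a linear random projection."""
    sig = signature_level2(path)
    rng = np.random.default_rng(seed)
    R = rng.normal(size=(out_dim, sig.size)) / np.sqrt(sig.size)
    return R @ sig
```

Note that the computational advantage of random projections in this setting typically comes from applying them without materializing the full signature tensor first; the sketch above materializes it only for clarity.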
Author Information
Enea Monzio Compagnoni (Swiss Federal Institute of Technology)
Luca Biggio (ETH Zürich)
Antonio Orvieto (ETH Zurich)
PhD Student at ETH Zurich. I’m interested in the design and analysis of optimization algorithms for deep learning. Interned at DeepMind, MILA, and Meta. All publications at http://orvi.altervista.org/ Looking for postdoc positions! :) antonio.orvieto@inf.ethz.ch
More from the Same Authors
- 2020 : Poster #12
  Luca Biggio
- 2020 : Uncertainty-aware Remaining Useful Life predictors
  Luca Biggio · Manuel Arias Chao · Olga Fink
- 2021 : Differentiable Strong Lensing for Complex Lens Modelling
  Luca Biggio
- 2022 : Batch size selection by stochastic optimal control
  Jim Zhao · Aurelien Lucchi · Frank Proske · Antonio Orvieto · Hans Kersting
- 2022 : Fast kinematics modeling for conjunction with lens image modeling
  Matthew Gomer · Luca Biggio · Sebastian Ertl · Han Wang · Aymeric Galan · Lyne Van de Vyvere · Dominique Sluse · Georgios Vernardos · Sherry Suyu
- 2022 : Cosmology from Galaxy Redshift Surveys with PointNet
  Sotiris Anagnostidis · Arne Thomsen · Alexandre Refregier · Tomasz Kacprzak · Luca Biggio · Thomas Hofmann · Tilman Tröster
- 2022 : Privileged Deep Symbolic Regression
  Luca Biggio · Tommaso Bendinelli · Pierre-alexandre Kamienny
- 2022 : Achieving a Better Stability-Plasticity Trade-off via Auxiliary Networks in Continual Learning
  Sanghwan Kim · Lorenzo Noci · Antonio Orvieto · Thomas Hofmann
- 2023 Poster: Dynamic Context Pruning for Efficient and Interpretable Autoregressive Transformers
  Sotiris Anagnostidis · Dario Pavllo · Luca Biggio · Lorenzo Noci · Aurelien Lucchi · Thomas Hofmann
- 2022 Poster: On the Theoretical Properties of Noise Correlation in Stochastic Optimization
  Aurelien Lucchi · Frank Proske · Antonio Orvieto · Francis Bach · Hans Kersting
- 2022 Poster: Signal Propagation in Transformers: Theoretical Perspectives and the Role of Rank Collapse
  Lorenzo Noci · Sotiris Anagnostidis · Luca Biggio · Antonio Orvieto · Sidak Pal Singh · Aurelien Lucchi
- 2022 Poster: Dynamics of SGD with Stochastic Polyak Stepsizes: Truly Adaptive Variants and Convergence to Exact Solution
  Antonio Orvieto · Simon Lacoste-Julien · Nicolas Loizou
- 2021 Poster: Rethinking the Variational Interpretation of Accelerated Optimization Methods
  Peiyuan Zhang · Antonio Orvieto · Hadi Daneshmand
- 2021 Poster: On the Second-order Convergence Properties of Random Search Methods
  Aurelien Lucchi · Antonio Orvieto · Adamos Solomou
- 2019 Poster: Shadowing Properties of Optimization Algorithms
  Antonio Orvieto · Aurelien Lucchi
- 2019 Poster: Continuous-time Models for Stochastic Optimization Algorithms
  Antonio Orvieto · Aurelien Lucchi