Do We Really Need Another Time-Series Forecasting Model?
Abstract
Time-series foundation models promise a single solution for many domains, but they often rely on transformer architectures that may not be suitable for latency-sensitive or specialized use cases. Meanwhile, lighter recurrent and state-space models such as TiRex, our xLSTM-Mixer, and FlowState show that simplified architectures can rival or surpass heavy transformers in both accuracy and efficiency. Benchmarks such as QuAnTS highlight capabilities beyond forecasting, including question answering and reasoning on time series, underlining the need for purpose-built models. This talk introduces xLSTM-Mixer and aims to motivate discussion about whether general forecasting still depends on traditional supervised training, and how our methods must be refined to fit the specific demands of each domain.