Expo Demonstration
La Nouvelle Orleans Ballroom A-C (level 2)

While Foundation Models (FMs) have greatly transformed AI solutions for language and vision, they often fall short on time-series data, which is widely used across industries. At IBM Research, our team focuses exclusively on advancing Time Series Foundation Models and has published several influential papers at top AI conferences. We pioneered this space, defining the inaugural architectures for several popular time-series FM backbones: the first transformer for multivariate time-series representation learning (TST, KDD 21), the first patched time-series transformer (PatchTST, ICLR 23), and the first patched MLP-Mixer for time series (PatchTSMixer, KDD 23). Our latest models, PatchTST and PatchTSMixer, are state of the art in this space while reducing compute and memory requirements by 2-3X. We have released these models through various open-source channels, attracting strong community engagement and rapid adoption into popular time-series libraries such as GluonTS, NeuralForecast, timeseriesAI (tsai) and HuggingFace.

In this session, we demonstrate our state-of-the-art models to the broader scientific community and showcase applications in diverse industrial settings spanning electricity, weather, traffic, retail, and more. Through illustrative notebooks and demos, we discuss best practices and the impact of the modeling approaches, design choices, and hyper-parameters that affect performance across datasets and use cases from different industries. We also share the pretraining and fine-tuning workflow templates we have standardized for various industrial settings, so practitioners can get started quickly. The demo session will be hands-on using our open-source libraries, and the associated code artifacts will be released for wider use.
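As a taste of the hands-on portion, the sketch below shows how a PatchTST forecaster can be instantiated through the HuggingFace transformers library (which hosts the PatchTST and PatchTSMixer implementations). This is a minimal illustration, not the demo material itself: it assumes transformers >= 4.36 (where PatchTSTConfig and PatchTSTForPrediction were introduced), and the hyper-parameters and synthetic data are placeholder choices.

# Minimal sketch: forward pass through a randomly initialized PatchTST
# forecaster from HuggingFace transformers. Hyper-parameters and data
# below are illustrative only, not the settings used in the demo.
import torch
from transformers import PatchTSTConfig, PatchTSTForPrediction

config = PatchTSTConfig(
    num_input_channels=7,   # e.g., a 7-variate series such as ETTh1
    context_length=512,     # length of the historical window
    prediction_length=96,   # forecast horizon
    patch_length=16,        # patch size: the key PatchTST design choice
    patch_stride=16,        # stride equal to patch length -> non-overlapping patches
)
model = PatchTSTForPrediction(config)

# A batch of 4 multivariate series: (batch, context_length, num_input_channels).
past_values = torch.randn(4, config.context_length, config.num_input_channels)

with torch.no_grad():
    outputs = model(past_values=past_values)

# Point forecasts with shape (batch, prediction_length, num_input_channels).
print(outputs.prediction_outputs.shape)  # expected: torch.Size([4, 96, 7])

In practice, fine-tuning replaces the random initialization with a pretrained checkpoint and a standard training loop (e.g., the HuggingFace Trainer) over the target dataset; the notebooks released with the session cover those workflows.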
