Invited Talk -- Christopher Kanan (University of Rochester) -- Title: Continual Learning Beyond Forgetting: Updating Foundation Models Efficiently
Abstract
Continual learning must evolve to support lifelong foundation models. Classical continual learning optimized the wrong objective by focusing on catastrophic forgetting under unrealistic storage constraints. In contrast, modern foundation models are limited by compute, not memory, and require update strategies that maximize retention and forward transfer per unit of computation. I will present a framework for compute-bounded replay, along with recent methods from my lab that enable efficient updates to large pretrained models. I will also discuss implications for multimodal models, out-of-distribution generalization, and the long-term goal of synthetic minds that acquire and consolidate knowledge over time. Together, these results outline a path toward scalable continual learning as the default training paradigm for foundation models.
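To make the compute-bounded framing concrete, the sketch below shows a generic replay loop in which the update budget is a fixed number of gradient steps rather than a storage limit. It is an illustration of the general idea only, not the framework presented in the talk; the helper `ReplayBuffer`, the function `compute_bounded_update`, and parameters such as `compute_budget_steps` and `replay_fraction` are hypothetical names chosen for this example.

```python
# Illustrative sketch of replay under a compute budget (hypothetical code,
# not the speaker's method): each update session is capped at a fixed number
# of gradient steps, and every step mixes new data with replayed examples.
import random
import torch


class ReplayBuffer:
    """Reservoir-style buffer of past examples (hypothetical helper)."""

    def __init__(self, capacity=10_000):
        self.capacity = capacity
        self.data = []

    def add(self, x, y):
        for xi, yi in zip(x, y):
            if len(self.data) < self.capacity:
                self.data.append((xi, yi))
            else:
                self.data[random.randrange(len(self.data))] = (xi, yi)

    def sample(self, n):
        batch = random.sample(self.data, min(n, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)


def compute_bounded_update(model, optimizer, loss_fn, new_batches,
                           replay_buffer, compute_budget_steps=1000,
                           replay_fraction=0.5):
    """Update a pretrained model on new data under a fixed gradient-step
    budget: compute, not memory, is the binding constraint."""
    new_iter = iter(new_batches)
    for _ in range(compute_budget_steps):
        try:
            x_new, y_new = next(new_iter)
        except StopIteration:
            new_iter = iter(new_batches)
            x_new, y_new = next(new_iter)

        # Mix replayed examples into the step to retain prior knowledge.
        n_replay = int(replay_fraction * len(x_new))
        if n_replay > 0 and len(replay_buffer.data) > 0:
            x_old, y_old = replay_buffer.sample(n_replay)
            x = torch.cat([x_new, x_old])
            y = torch.cat([y_new, y_old])
        else:
            x, y = x_new, y_new

        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()

        # Stash new examples so later sessions can replay them.
        replay_buffer.add(x_new, y_new)
    return model
```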