

Adapting to Continuous Covariate Shift via Online Density Ratio Estimation

Yu-Jie Zhang · Zhen-Yu Zhang · Peng Zhao · Masashi Sugiyama

Great Hall & Hall B1+B2 (level 1) #927
Wed 13 Dec 3 p.m. PST — 5 p.m. PST


Dealing with distribution shifts is one of the central challenges for modern machine learning. One fundamental setting is covariate shift, where the input distribution changes from the training to the testing stage while the input-conditional output distribution remains unchanged. In this paper, we initiate the study of a more challenging scenario --- continuous covariate shift --- in which the test data arrive sequentially and their distribution can shift continuously. Our goal is to adaptively train the predictor such that its prediction risk accumulated over time is minimized. Starting with importance-weighted learning, we theoretically show that the method works effectively if the time-varying density ratios between test and training inputs can be accurately estimated. However, existing density ratio estimation methods would fail due to data scarcity at each time step. To this end, we propose an online density ratio estimation method that appropriately reuses historical information. Our method is proven to perform well by enjoying a dynamic regret bound, which in turn yields an excess risk guarantee for the predictor. Empirical results also validate the effectiveness of our approach.
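To make the importance-weighted learning idea concrete, here is a minimal, hypothetical sketch (not the authors' algorithm): an importance-weighted logistic regression on a toy 1-D covariate-shift problem, where each training loss is weighted by the density ratio r(x) = p_test(x)/p_train(x). For illustration the ratio is computed in closed form for two Gaussians; in the paper's setting this quantity would instead come from the proposed online density ratio estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy covariate-shift setup: training inputs ~ N(0, 1), test inputs ~ N(1, 1);
# the labeling function depends only on x, so only p(x) shifts.
n = 500
x_train = rng.normal(0.0, 1.0, n)
y_train = (x_train + 0.3 * rng.normal(size=n) > 0).astype(float)

def density_ratio(x, mu_test=1.0):
    """Closed-form ratio p_test(x)/p_train(x) for N(mu_test,1) vs N(0,1).
    In practice this would be replaced by an estimated, time-varying ratio."""
    return np.exp(mu_test * x - 0.5 * mu_test**2)

def iw_logistic_fit(x, y, w, lr=0.1, epochs=200):
    """Importance-weighted logistic regression via full-batch gradient descent."""
    theta = np.zeros(2)                               # [bias, slope]
    X = np.stack([np.ones_like(x), x], axis=1)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ theta))          # predicted probabilities
        grad = X.T @ (w * (p - y)) / len(y)           # ratio-weighted log-loss gradient
        theta -= lr * grad
    return theta

weights = density_ratio(x_train)
theta = iw_logistic_fit(x_train, y_train, weights)
print(theta)
```

The key point is that the gradient is reweighted by `weights`, so the fitted predictor targets the test-time risk rather than the training risk; the quality of this correction hinges entirely on how accurately the density ratios are estimated, which is exactly what the paper's online estimator addresses.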
