

Poster

Personalized Dictionary Learning for Heterogeneous Datasets

Geyu Liang · Naichen Shi · Raed AL Kontar · Salar Fattahi

Great Hall & Hall B1+B2 (level 1) #1222
[ Paper ] [ Poster ] [ OpenReview ]
Thu 14 Dec 3 p.m. PST — 5 p.m. PST

Abstract:

We introduce a relevant yet challenging problem named Personalized Dictionary Learning (PerDL), where the goal is to learn sparse linear representations from heterogeneous datasets that share some commonality. In PerDL, we model each dataset's shared and unique features as global and local dictionaries. The challenges of PerDL are not only inherited from classical dictionary learning (DL) but also arise from the unknown nature of the shared and unique features. In this paper, we rigorously formulate this problem and provide conditions under which the global and local dictionaries can be provably disentangled. Under these conditions, we provide a meta-algorithm called Personalized Matching and Averaging (PerMA) that recovers both global and local dictionaries from heterogeneous datasets. PerMA is highly efficient; it converges to the ground truth at a linear rate under suitable conditions. Moreover, it automatically borrows strength from strong learners to improve the prediction of weak learners. As a general framework for extracting global and local dictionaries, we demonstrate the application of PerDL to learning tasks such as training with imbalanced datasets and video surveillance.
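To make the data model in the abstract concrete, the following is a minimal, hypothetical sketch (not taken from the paper) of heterogeneous datasets that share one global dictionary while each dataset carries its own local dictionary and sparse codes. All dimensions, sparsity levels, and the noise scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative only): signal dim d, global atoms k_g,
# local atoms k_l, samples per dataset n, number of heterogeneous datasets N.
d, k_g, k_l, n, N = 20, 5, 3, 100, 4

# One global dictionary shared across all datasets; columns are unit-norm atoms.
D_global = rng.standard_normal((d, k_g))
D_global /= np.linalg.norm(D_global, axis=0)

datasets = []
for i in range(N):
    # Each dataset also has its own local dictionary capturing unique features.
    D_local = rng.standard_normal((d, k_l))
    D_local /= np.linalg.norm(D_local, axis=0)

    # Sparse codes over the concatenated dictionary [D_global, D_local]:
    # each sample activates only a few atoms (here, 2 nonzeros chosen at random).
    X = np.zeros((k_g + k_l, n))
    for j in range(n):
        support = rng.choice(k_g + k_l, size=2, replace=False)
        X[support, j] = rng.standard_normal(2)

    # Observations: sparse combinations of shared and unique atoms plus noise.
    Y = np.hstack([D_global, D_local]) @ X + 0.01 * rng.standard_normal((d, n))
    datasets.append(Y)

print([Y.shape for Y in datasets])  # N datasets sharing D_global, differing in D_local
```

In this sketch, recovering the dictionaries from the observations alone would require disentangling which atoms are shared (global) and which are dataset-specific (local), which is the separation the paper's PerMA meta-algorithm is designed to achieve under its stated conditions.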
