Spotlight
Sampling from a k-DPP without looking at all items
Daniele Calandriello · Michal Derezinski · Michal Valko

Thu Dec 10 07:00 AM -- 07:10 AM (PST) @ Orals & Spotlights: Probabilistic Models/Statistics

Determinantal point processes (DPPs) are a useful probabilistic model for selecting a small, diverse subset out of a large collection of items, with applications in summarization, recommendation, stochastic optimization, experimental design, and more. Given a kernel function and a subset size k, our goal is to sample k out of n items with probability proportional to the determinant of the kernel matrix induced by the subset (a.k.a. k-DPP). Existing k-DPP sampling algorithms require an expensive preprocessing step involving multiple passes over all n items, making them infeasible for large datasets. A naïve heuristic addressing this problem is to uniformly subsample a fraction of the data and perform k-DPP sampling only on those items; however, this method offers no guarantee that the produced sample will even approximately resemble the target distribution over the original dataset. In this paper, we develop alpha-DPP, an algorithm which adaptively builds a sufficiently large uniform sample of data that is then used to efficiently generate a smaller set of k items, while ensuring that this set is drawn exactly from the target distribution defined on all n items. We show empirically that our algorithm produces a k-DPP sample after observing only a small fraction of all elements, leading to several orders of magnitude faster performance compared to the state of the art. Our implementation of alpha-DPP is provided at https://github.com/guilgautier/DPPy/.
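For concreteness, a k-DPP assigns each size-k subset S a probability proportional to det(L_S), the determinant of the kernel submatrix indexed by S. Below is a minimal sketch of drawing such a sample with the DPPy library linked above; the mode='alpha' argument is an assumption about how DPPy exposes the alpha-DPP sampler, and the random-feature kernel is purely illustrative.

import numpy as np
from dppy.finite_dpps import FiniteDPP

# Build a PSD likelihood kernel L (n x n) from random item features.
rng = np.random.RandomState(0)
n, d, k = 1000, 20, 10
X = rng.randn(n, d)
L = X @ X.T  # Gram matrix: L[i, j] = similarity of items i and j

# A k-DPP samples S with P(S) proportional to det(L_S) over all |S| = k.
dpp = FiniteDPP('likelihood', L=L)

# Draw one exact k-DPP sample; mode='alpha' is assumed to select the
# rescaled alpha-DPP sampler (DPPy also offers other exact k-DPP modes).
sample = dpp.sample_exact_k_dpp(size=k, mode='alpha')
print(sorted(sample))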

Author Information

Daniele Calandriello (DeepMind)
Michal Derezinski (UC Berkeley)
Michal Valko (DeepMind)

Michal is a machine learning scientist at DeepMind Paris, a tenured researcher at Inria, and the lecturer of the master's course Graphs in Machine Learning at ENS Paris-Saclay. Michal is primarily interested in designing algorithms that require as little human supervision as possible. This means 1) reducing the “intelligence” that humans need to input into the system and 2) minimizing the data that humans need to spend inspecting, classifying, or “tuning” the algorithms. That is why he works on methods and settings that can deal with minimal feedback, such as deep reinforcement learning, bandit algorithms, and self-supervised learning. Michal is actively working on representation learning and building world models. He is also working on deep (reinforcement) learning algorithms that have some theoretical underpinning. In the past he has worked on sequential algorithms with structured decisions where exploiting the structure leads to provably faster learning. He received his Ph.D. in 2011 from the University of Pittsburgh under the supervision of Miloš Hauskrecht, and afterwards was a postdoc of Rémi Munos before taking a permanent position at Inria in 2012.
