Poster
DppNet: Approximating Determinantal Point Processes with Deep Networks
Zelda Mariet · Yaniv Ovadia · Jasper Snoek

Tue Dec 10th 10:45 AM -- 12:45 PM @ East Exhibition Hall B + C #123

Determinantal point processes (DPPs) provide an elegant and versatile way to sample sets of items that balance the point-wise quality of individual items with the set-wise diversity of the selected subset. For this reason, they have gained prominence in many machine learning applications that rely on subset selection. However, sampling from a DPP over a ground set of size N is a costly operation, requiring in general O(N^3) preprocessing and O(Nk^3) sampling time for subsets of size k. We approach this problem by introducing DppNets: generative deep models that produce DPP-like samples for arbitrary ground sets. We develop an inhibitive attention mechanism based on transformer networks that captures a notion of dissimilarity between feature vectors. We show theoretically that this approximation is sensible, as it preserves the guarantees of inhibition, or dissimilarity, that make DPPs so powerful and unique. Empirically, we show across multiple datasets that DppNets are orders of magnitude faster than competing approaches to DPP sampling, while generating high-likelihood samples and performing as well as DPPs on downstream tasks.
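One way to picture the inhibitive attention idea is as dot-product attention with its weights flipped, so that dissimilar query/key pairs receive the largest weight. The following is a minimal NumPy sketch under that complement-of-softmax reading; the function name `inhibitive_attention` and its details are illustrative assumptions, not the authors' exact architecture.

import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def inhibitive_attention(queries, keys, values):
    """Dot-product attention whose weights are complemented so that
    DISSIMILAR query/key pairs dominate. Illustrative sketch only."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)             # similarity logits
    attn = softmax(scores, axis=-1)                    # standard attention weights
    inhib = 1.0 - attn                                 # complement: emphasize dissimilarity
    inhib = inhib / inhib.sum(axis=-1, keepdims=True)  # renormalize to a distribution
    return inhib @ values

# Toy usage: 5 items with 4-dimensional features attending over themselves.
X = np.random.randn(5, 4)
out = inhibitive_attention(X, X, X)

For contrast, the exact-sampling cost quoted above comes from eigendecomposing the N-by-N kernel (e.g., np.linalg.eigh, which is O(N^3)) before any sample is drawn; a trained network sidesteps that preprocessing at sampling time.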

Author Information

Zelda Mariet (Google AI)
Yaniv Ovadia (Princeton University)
Jasper Snoek (Google Brain)
