Spotlight
Graph-based Discriminators: Sample Complexity and Expressiveness
Roi Livni · Yishay Mansour
A basic question in learning theory is to determine whether two distributions are identical when we have access only to examples sampled from each. This task arises, for example, in the context of Generative Adversarial Networks (GANs), where a discriminator is trained to distinguish a real-life distribution from a synthetic one. Classically, we use a hypothesis class $H$ and declare the two distributions distinct if, for some $h\in H$, the expected value of $h$ on the two distributions is (significantly) different.
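As an illustration, the following minimal Python sketch (not from the paper) estimates the classical $k=1$ discrepancy, i.e. the largest gap, over $h\in H$, between the empirical means of $h$ on the two samples; the finite class of threshold functions used for $H$ here is a hypothetical choice made only for the example.

import numpy as np

def classical_discrepancy(samples_p, samples_q, hypotheses):
    # Largest gap, over h in the class, between the empirical means of h
    # on the two sample sets.
    return max(
        abs(np.mean([h(x) for x in samples_p]) - np.mean([h(x) for x in samples_q]))
        for h in hypotheses
    )

# Hypothetical example: threshold functions on the real line.
rng = np.random.default_rng(0)
P = rng.normal(0.0, 1.0, size=1000)   # "real" distribution
Q = rng.normal(0.3, 1.0, size=1000)   # "synthetic" distribution
H = [lambda x, t=t: float(x <= t) for t in np.linspace(-3.0, 3.0, 61)]
print(classical_discrepancy(P, Q, H))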
Our starting point is the following fundamental question: is it beneficial to let the hypothesis depend on more than a single random example? To address this question we define $k$-ary discriminators, which are based on a family $\mathcal{G}$ of Boolean $k$-ary functions. Each function $g\in \mathcal{G}$ naturally defines a hyper-graph, indicating whether a given hyper-edge exists. A function $g\in \mathcal{G}$ distinguishes between two distributions if the expected value of $g$ on a $k$-tuple of i.i.d. examples differs (significantly) between the two distributions. A sketch of this $k$-ary analogue follows below.
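The sketch below estimates the $k$-ary discrepancy under the assumption that the expectation of each $g$ is approximated by averaging over random $k$-tuples formed from the samples; this estimator and the pairwise family $\mathcal{G}$ in the example are illustrative choices, not the paper's construction.

import numpy as np

def kary_discrepancy(samples_p, samples_q, functions, k, n_tuples=2000, seed=0):
    # Largest gap, over g in the family, between the empirical means of g
    # on random k-tuples formed from the two sample sets.
    rng = np.random.default_rng(seed)
    def mean_on(samples, g):
        idx = rng.integers(0, len(samples), size=(n_tuples, k))
        return np.mean([g(*samples[i]) for i in idx])
    return max(abs(mean_on(samples_p, g) - mean_on(samples_q, g)) for g in functions)

# Hypothetical binary (k = 2) family: g_t(x, y) = 1 iff |x - y| <= t,
# i.e. each g_t asks whether the "edge" between a pair of examples is short.
rng = np.random.default_rng(1)
P = rng.normal(0.0, 1.0, size=1000)
Q = rng.normal(0.0, 2.0, size=1000)   # same mean, different spread
G = [lambda x, y, t=t: float(abs(x - y) <= t) for t in np.linspace(0.1, 3.0, 30)]
print(kary_discrepancy(P, Q, G, k=2))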
We study the expressiveness of families of $k$-ary functions compared to the classical hypothesis class $H$, which corresponds to $k=1$. We show a separation in expressiveness between $(k+1)$-ary and $k$-ary functions. In particular, this demonstrates the benefit of using distinguishers with $k\geq 2$.
For $k\geq 2$ we introduce a notion analogous to the VC-dimension and show that it controls the sample complexity. We then provide upper and lower bounds on the sample complexity as a function of this extended notion of VC-dimension.
Author Information
Roi Livni (Tel Aviv University)
Yishay Mansour (Tel Aviv University / Google)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Graph-based Discriminators: Sample Complexity and Expressiveness (Thu Dec 12th 01:00 -- 03:00 AM, Room: East Exhibition Hall B + C)
More from the Same Authors
- 2020 Poster: Sample Complexity of Uniform Convergence for Multicalibration (Eliran Shabat · Lee Cohen · Yishay Mansour)
- 2020 Poster: Prediction with Corrupted Expert Advice (Idan Amir · Idan Attias · Tomer Koren · Yishay Mansour · Roi Livni)
- 2020 Spotlight: Prediction with Corrupted Expert Advice (Idan Amir · Idan Attias · Tomer Koren · Yishay Mansour · Roi Livni)
- 2020 Poster: Adversarially Robust Streaming Algorithms via Differential Privacy (Avinatan Hasidim · Haim Kaplan · Yishay Mansour · Yossi Matias · Uri Stemmer)
- 2020 Poster: Private Learning of Halfspaces: Simplifying the Construction and Reducing the Sample Complexity (Haim Kaplan · Yishay Mansour · Uri Stemmer · Eliad Tsfadia)
- 2020 Poster: Synthetic Data Generators -- Sequential and Private (Olivier Bousquet · Roi Livni · Shay Moran)
- 2020 Poster: A Limitation of the PAC-Bayes Framework (Roi Livni · Shay Moran)
- 2020 Oral: Adversarially Robust Streaming Algorithms via Differential Privacy (Avinatan Hasidim · Haim Kaplan · Yishay Mansour · Yossi Matias · Uri Stemmer)
- 2019 Poster: Online Stochastic Shortest Path with Bandit Feedback and Unknown Transition Function (Aviv Rosenberg · Yishay Mansour)
- 2019 Poster: Learning to Screen (Alon Cohen · Avinatan Hassidim · Haim Kaplan · Yishay Mansour · Shay Moran)
- 2019 Poster: Individual Regret in Cooperative Nonstochastic Multi-Armed Bandits (Yogev Bar-On · Yishay Mansour)
- 2016 Poster: Online Pricing with Strategic and Patient Buyers (Michal Feldman · Tomer Koren · Roi Livni · Yishay Mansour · Aviv Zohar)
- 2013 Poster: From Bandits to Experts: A Tale of Domination and Independence (Noga Alon · Nicolò Cesa-Bianchi · Claudio Gentile · Yishay Mansour)
- 2013 Oral: From Bandits to Experts: A Tale of Domination and Independence (Noga Alon · Nicolò Cesa-Bianchi · Claudio Gentile · Yishay Mansour)