Poster
Learning a 1-layer conditional generative model in total variation
Ajil Jalal · Justin Kang · Ananya Uppal · Kannan Ramchandran · Eric Price

Thu Dec 14 03:00 PM -- 05:00 PM (PST) @ Great Hall & Hall B1+B2 #1009
A conditional generative model is a method for sampling from a conditional distribution $p(y \mid x)$. For example, one may want to sample an image of a cat given the label "cat". A feed-forward conditional generative model is a function $g(x, z)$ that takes the input $x$ and a random seed $z$, and outputs a sample $y$ from $p(y \mid x)$. Ideally, the distribution of outputs $(x, g(x, z))$ would be close in total variation to the ideal distribution $(x, y)$. Generalization bounds for other learning models require assumptions on the distribution of $x$, even in simple settings like linear regression with Gaussian noise. We show these assumptions are unnecessary in our model, for both linear regression and single-layer ReLU networks. Given samples $(x, y)$, we show how to learn a 1-layer ReLU conditional generative model in total variation. As our result has no assumption on the distribution of inputs $x$, if we are given access to the internal activations of a deep generative model, we can compose our 1-layer guarantee to progressively learn the deep model using a near-linear number of samples.
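
As a concrete illustration of the model class in the abstract, the sketch below shows a feed-forward 1-layer ReLU conditional generative model of the form $g(x, z) = \mathrm{ReLU}(Ax + Bz + c)$. The specific parameterization, the Gaussian seed distribution, and the class name `OneLayerConditionalGenerator` are illustrative assumptions, not the paper's exact construction; in the paper, the parameters are learned from samples $(x, y)$ so that $(x, g(x, z))$ is close in total variation to $(x, y)$.

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

class OneLayerConditionalGenerator:
    """Minimal sketch of a 1-layer ReLU conditional generative model g(x, z)."""

    def __init__(self, d_x, d_z, d_y, rng=None):
        self.rng = rng or np.random.default_rng(0)
        # Illustrative random parameters; in the paper these would be learned
        # from samples (x, y) so that the joint distribution of (x, g(x, z))
        # is close in total variation to that of (x, y).
        self.A = self.rng.standard_normal((d_y, d_x))
        self.B = self.rng.standard_normal((d_y, d_z))
        self.c = np.zeros(d_y)
        self.d_z = d_z

    def sample(self, x):
        # Draw a fresh random seed z and map (x, z) to an output sample.
        z = self.rng.standard_normal(self.d_z)
        return relu(self.A @ x + self.B @ z + self.c)

# Usage: draw one sample y ~ p(y | x) for a given conditioning input x.
gen = OneLayerConditionalGenerator(d_x=4, d_z=8, d_y=16)
x = np.ones(4)
y = gen.sample(x)
print(y.shape)  # (16,)
```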

Author Information

Ajil Jalal (UC Berkeley)
Justin Kang (University of California, Berkeley)
Ananya Uppal (University of Texas at Austin)
Kannan Ramchandran (UC Berkeley)
Eric Price (University of Texas at Austin)