Poster
On Convergence and Generalization of Dropout Training
Poorya Mianjy · Raman Arora

Tue Dec 08 09:00 AM -- 11:00 AM (PST) @ Poster Session 1 #472
We study dropout in two-layer neural networks with rectified linear unit (ReLU) activations. Under mild overparametrization, and assuming that the limiting kernel can separate the data distribution with a positive margin, we show that dropout training with logistic loss achieves $\epsilon$-suboptimality in test error in $O(1/\epsilon)$ iterations.
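
The object of study is dropout training of a two-layer ReLU network under logistic loss. As an illustrative sketch only, not the paper's algorithm or its assumptions, the following NumPy snippet shows one plausible form of such a training loop: the hidden width, dropout retain probability, learning rate, and toy data are all hypothetical choices, and inverted-dropout scaling is used so the forward pass is unbiased.

```python
# A minimal sketch (assumptions: toy data, fixed output weights, inverted
# dropout) of dropout training on a two-layer ReLU network with logistic loss.
import numpy as np

rng = np.random.default_rng(0)

d, m, n = 10, 256, 200                 # input dim, hidden width, sample size
X = rng.standard_normal((n, d))        # hypothetical toy inputs
y = np.sign(X[:, 0])                   # toy labels in {-1, +1}

W = rng.standard_normal((m, d)) / np.sqrt(d)       # hidden-layer weights
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)   # fixed output weights
p = 0.5                                # dropout retain probability
lr = 0.1                               # step size (illustrative)

for t in range(500):
    # Sample a dropout mask; scale by 1/p so the output is unbiased in mean.
    mask = (rng.random(m) < p) / p
    H = np.maximum(X @ W.T, 0.0)       # ReLU hidden activations, shape (n, m)
    out = (H * mask) @ a               # dropped-out network output, shape (n,)
    # Logistic loss log(1 + exp(-y * out)); gradient with respect to out:
    g = -y / (1.0 + np.exp(y * out))
    # Backpropagate to the hidden-layer weights (output weights kept fixed).
    dH = np.outer(g, a * mask) * (H > 0)
    W -= lr * (dH.T @ X) / n

# At test time dropout is switched off (mask replaced by its mean, i.e., 1).
test_out = np.maximum(X @ W.T, 0.0) @ a
print("training error:", np.mean(np.sign(test_out) != y))
```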

Author Information

Poorya Mianjy (Johns Hopkins University)
Raman Arora (Johns Hopkins University)