

Poster

On Convergence and Generalization of Dropout Training

Poorya Mianjy · Raman Arora

Poster Session 1 #472

Abstract: We study dropout in two-layer neural networks with rectified linear unit (ReLU) activations. Under mild overparametrization, and assuming that the limiting kernel can separate the data distribution with a positive margin, we show that dropout training with logistic loss achieves $\epsilon$-suboptimality in the test error in $O(1/\epsilon)$ iterations.
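To make the setting concrete, below is a minimal sketch of dropout training for a two-layer ReLU network with logistic loss. It is an illustration of the training procedure described in the abstract, not the paper's construction: the synthetic data, width, keep probability, learning rate, and the choice to train only the hidden-layer weights with fixed outer weights are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and hyperparameters (not taken from the paper).
n, d, m = 200, 10, 256        # samples, input dimension, hidden width
keep_prob = 0.5               # probability of keeping each hidden unit
lr, steps = 0.1, 500

# Synthetic, linearly separable data with labels in {-1, +1}.
X = rng.standard_normal((n, d))
y = np.sign(X @ rng.standard_normal(d))

# Two-layer ReLU network f(x) = a^T relu(W x); outer weights a are fixed,
# only the hidden-layer weights W are trained (a common simplification).
W = rng.standard_normal((m, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)

for t in range(steps):
    # Fresh dropout mask on the hidden units at every iteration,
    # with inverted-dropout scaling by 1/keep_prob.
    mask = (rng.random((n, m)) < keep_prob).astype(float)
    Z = X @ W.T                                   # pre-activations, (n, m)
    H = np.maximum(Z, 0.0) * mask / keep_prob     # dropped-out ReLU features
    out = H @ a                                   # network output f(x)

    # Logistic loss log(1 + exp(-y f(x))) and its derivative w.r.t. f(x),
    # computed stably via logaddexp.
    margins = y * out
    loss = np.mean(np.logaddexp(0.0, -margins))
    grad_out = -y * np.exp(-np.logaddexp(0.0, margins))

    # Gradient step on the hidden-layer weights.
    dZ = grad_out[:, None] * a[None, :] * (mask / keep_prob) * (Z > 0)
    W -= lr * (dZ.T @ X) / n

    if t % 100 == 0:
        # Classification error of the clean (no-dropout) network.
        clean_out = np.maximum(X @ W.T, 0.0) @ a
        print(f"step {t:4d}  loss {loss:.4f}  "
              f"error {np.mean(np.sign(clean_out) != y):.3f}")
```

The paper's result concerns the test error of such iterates under a margin assumption on the limiting kernel; this sketch only shows the mechanics of resampling the dropout mask each iteration and taking gradient steps on the resulting stochastic logistic loss.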
