We extend generative stochastic networks to supervised learning of representations. In particular, we introduce a hybrid training objective combining a generative and a discriminative cost function, governed by a trade-off parameter λ. We use a new variant of network training involving noise injection, i.e., walkback training, to jointly optimize multiple network layers. No additional regularization constraints, such as ℓ1 or ℓ2 norms or dropout variants, and no pooling or convolutional layers were added. Nevertheless, we obtain state-of-the-art performance on the MNIST dataset, without using permutation invariant digits, and significantly outperform baseline models on sub-variants of the MNIST and rectangles datasets.
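The hybrid objective described above can be sketched as a convex combination of a generative (reconstruction) term and a discriminative (classification) term weighted by λ. The function below is a minimal illustrative sketch, not the paper's exact formulation: the choice of squared reconstruction error and cross-entropy, and all variable names, are assumptions for demonstration.

```python
import numpy as np

def hybrid_cost(x, y, reconstruction, class_probs, lam=0.5):
    """Illustrative hybrid objective: lam trades off a generative
    (reconstruction) cost against a discriminative (classification) cost.
    The specific cost choices here are assumptions, not the paper's notation.

    x              : (n, d) input batch
    y              : (n,)   integer class labels
    reconstruction : (n, d) network reconstruction of x
    class_probs    : (n, k) predicted class probabilities
    """
    # Generative cost: mean squared reconstruction error of the input.
    c_gen = np.mean((x - reconstruction) ** 2)
    # Discriminative cost: cross-entropy of the true-label probabilities.
    c_disc = -np.mean(np.log(class_probs[np.arange(len(y)), y] + 1e-12))
    # Trade-off parameter lam in [0, 1]: lam = 1 is purely generative,
    # lam = 0 is purely discriminative.
    return lam * c_gen + (1.0 - lam) * c_disc
```

Setting λ = 1 recovers a purely generative objective and λ = 0 a purely discriminative one; intermediate values blend the two.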
Matthias Zöhrer (Graz University of Technology)
Franz Pernkopf (Signal Processing and Speech Communication Laboratory, Graz, Austria)