

Poster

Modular Networks: Learning to Decompose Neural Computation

Louis Kirsch · Julius Kunze · David Barber

Room 210 #62

Keywords: [ Variational Inference ] [ Recurrent Networks ] [ Supervised Deep Networks ] [ Representation Learning ] [ Latent Variable Models ] [ Optimization for Deep Networks ]


Abstract:

Scaling model capacity has been vital to the success of deep learning. For a typical network, however, the compute resources and training time required grow dramatically with model size. Conditional computation is a promising way to increase the number of parameters with a relatively small increase in resources. We propose a training algorithm that flexibly chooses neural modules based on the data to be processed. Both the decomposition and the modules are learned end-to-end. In contrast to existing approaches, training does not rely on regularization to enforce diversity in module use. We apply modular networks to both image recognition and language modeling tasks, where we achieve superior performance compared to several baselines. Introspection reveals that the modules specialize in interpretable contexts.
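To make the conditional-computation idea concrete, the sketch below shows a layer that holds several small candidate modules and a learned controller that picks one module per input, executing only the chosen one. This is a hypothetical forward-pass illustration, not the authors' algorithm: the abstract describes an end-to-end training procedure, whereas the hard argmax used here is non-differentiable and would need something like an EM-style or score-function estimator to train the controller, which is not shown. All class and variable names are assumptions.

```python
# Illustrative sketch of input-conditional module selection (forward pass only).
# Not the authors' training algorithm; names and shapes are assumptions.
import torch
import torch.nn as nn


class ModularLayer(nn.Module):
    """Holds several small modules and a controller that picks one per input."""

    def __init__(self, in_dim: int, out_dim: int, num_modules: int):
        super().__init__()
        self.out_dim = out_dim
        # Candidate modules: only the selected one is executed for each input.
        self.candidates = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
            for _ in range(num_modules)
        )
        # Controller scores each module given the input.
        self.controller = nn.Linear(in_dim, num_modules)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = self.controller(x)          # (batch, num_modules)
        choice = scores.argmax(dim=-1)       # hard, per-example module choice
        out = x.new_empty((x.size(0), self.out_dim))
        for idx, module in enumerate(self.candidates):
            mask = choice == idx
            if mask.any():                   # run each module only on its inputs
                out[mask] = module(x[mask])
        return out


if __name__ == "__main__":
    layer = ModularLayer(in_dim=16, out_dim=8, num_modules=4)
    y = layer(torch.randn(32, 16))
    print(y.shape)  # torch.Size([32, 8])
```

The routing above is meant only to show why conditional computation scales: adding more candidate modules increases the parameter count, while the per-input compute stays roughly that of a single module.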
