We introduce a technique based on singular vector canonical correlation analysis (SVCCA) for measuring the generality of neural network layers across a continuously parametrized set of tasks. We illustrate this method by studying generality in neural networks trained to solve parametrized boundary value problems based on the Poisson partial differential equation. We find that the first hidden layers are general, and that they learn generalized coordinates over the input domain. Deeper layers are successively more specific. Next, we validate our method against an existing technique that measures layer generality using transfer learning experiments. We find excellent agreement between the two methods, and note that our method is much faster, particularly for continuously parametrized problems. Finally, we also apply our method to networks trained on MNIST, and show it is consistent with, and complementary to, another study of intrinsic dimensionality.
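To make the underlying comparison concrete, here is a minimal sketch of an SVCCA similarity score between two layers, in the spirit of Raghu et al.'s original formulation: each layer's activation matrix is reduced by SVD to the directions explaining most of its variance, and canonical correlation analysis is then applied to the two reduced subspaces. The function name, the variance threshold, and the QR-based CCA step are illustrative choices, not the authors' exact implementation.

```python
import numpy as np

def svcca_similarity(acts1, acts2, var_threshold=0.99):
    """Illustrative SVCCA similarity between two layers.

    acts1, acts2: arrays of shape (num_neurons, num_datapoints),
    i.e. each row is one neuron's response over a shared dataset.
    Returns the mean canonical correlation in [0, 1].
    """
    def svd_reduce(acts):
        # Center each neuron, then keep the top singular directions
        # explaining var_threshold of the total variance.
        acts = acts - acts.mean(axis=1, keepdims=True)
        U, s, Vt = np.linalg.svd(acts, full_matrices=False)
        frac = np.cumsum(s**2) / np.sum(s**2)
        keep = int(np.searchsorted(frac, var_threshold)) + 1
        # Reduced representation: (keep, num_datapoints)
        return np.diag(s[:keep]) @ Vt[:keep]

    X = svd_reduce(acts1)
    Y = svd_reduce(acts2)

    # CCA via QR: the canonical correlations are the singular values
    # of Qx^T Qy, where Qx, Qy are orthonormal bases of the two
    # (datapoint-space) subspaces.
    Qx, _ = np.linalg.qr(X.T)
    Qy, _ = np.linalg.qr(Y.T)
    rho = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return float(np.mean(np.clip(rho, 0.0, 1.0)))
```

Because the score depends only on the subspaces spanned by the activations, it is invariant to invertible linear transformations of a layer's neurons, which is what makes it suitable for comparing layers across independently trained networks.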
Martin Magill (University of Ontario Institute of Technology)
I am a PhD student in modelling and computational science under the supervision of Dr. Hendrick de Haan in the cNAB.LAB for computational nanobiophysics. Recently, I’ve been interested in using deep neural networks to solve the partial differential equations that describe electric fields and molecular transport through nanofluidic devices. I’ve also been using these mathematical problems as a controlled setting in which to study deep neural networks themselves.