Poster in Workshop: 4th Workshop on Self-Supervised Learning: Theory and Practice

Learning Useful Representations of Recurrent Neural Network Weight Matrices

Vincent Herrmann · Francesco Faccio · Jürgen Schmidhuber


Abstract:

Recurrent Neural Networks (RNNs) are general-purpose parallel-sequential computers. The program of an RNN is its weight matrix. How can we learn useful representations of RNN weights that facilitate both RNN analysis and downstream tasks? While the "mechanistic approach" directly inspects an RNN's weights to predict its behavior, the "functionalist approach" analyzes its overall functionality, specifically its input-output mapping. We propose two novel functionalist approaches that extract information from RNN weights by "interrogating" the RNN through probing inputs. Our novel theoretical framework for the functionalist approach demonstrates conditions under which it can generate rich representations that help determine RNN behavior. We compare RNN weight representations generated by mechanistic and functionalist approaches by evaluating them on two downstream tasks. Our results show the superiority of the functionalist methods.
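A minimal sketch of the functionalist idea described above, assuming a PyTorch RNN: the RNN is "interrogated" with a fixed batch of probing inputs, and its outputs are flattened into a vector that represents the network's weights by its behavior. The probe set, its shape, and the flattening step are illustrative assumptions, not the authors' exact method (which learns the probing inputs and the representation).

```python
import torch
import torch.nn as nn

def functionalist_representation(rnn: nn.RNN, probes: torch.Tensor) -> torch.Tensor:
    """Represent an RNN by its responses to fixed probing inputs.

    probes: (num_probes, seq_len, input_size) batch of probing sequences.
    Returns a flat vector of the RNN's outputs on those probes, which
    characterizes the RNN's input-output mapping rather than its raw weights.
    """
    with torch.no_grad():
        outputs, _ = rnn(probes)  # (num_probes, seq_len, hidden_size)
    return outputs.flatten()      # one behavioral "fingerprint" of the RNN

# Usage: compare two RNNs by behavior instead of by raw weight matrices.
torch.manual_seed(0)
probes = torch.randn(8, 10, 4)   # 8 random probing sequences (assumed fixed)
rnn_a = nn.RNN(input_size=4, hidden_size=16, batch_first=True)
rnn_b = nn.RNN(input_size=4, hidden_size=16, batch_first=True)
rep_a = functionalist_representation(rnn_a, probes)
rep_b = functionalist_representation(rnn_b, probes)
print(torch.dist(rep_a, rep_b))  # behavioral distance between the two RNNs
```

Note that two RNNs whose hidden units are permuted versions of each other have very different weight matrices but identical representations under this scheme, which is one motivation for functionalist over mechanistic representations.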
