Poster

Universal Neural Functionals

Allan Zhou · Chelsea Finn · James Harrison

East Exhibit Hall A-C #4706
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

A challenging problem in many modern machine learning tasks is to process weight-space features, i.e., to transform or extract information from the weights and gradients of a neural network. Recent works have developed promising weight-space models that are equivariant to the permutation symmetries of simple feedforward networks. However, they are not applicable to general architectures, since the permutation symmetries of a weight space can be complicated by recurrence or residual connections. This work proposes an algorithm that automatically constructs permutation equivariant models, which we refer to as universal neural functionals (UNFs), for any weight space. Among other applications, we demonstrate how UNFs can be substituted into existing learned optimizer designs, and find promising improvements over prior methods when optimizing small image classifiers and language models. Our results suggest that learned optimizers can benefit from considering the (symmetry) structure of the weight space they optimize.
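To make the permutation symmetry concrete, below is a minimal NumPy sketch (illustrative only, not the authors' implementation) of the simplest case the abstract describes: for a two-layer feedforward network, permuting the hidden units, i.e., the rows of the first weight matrix and bias together with the columns of the second weight matrix, leaves the network's function unchanged. A weight-space model such as a UNF must respect exactly this kind of symmetry, which becomes more intricate once recurrence or residual connections tie weights together.

```python
import numpy as np

# Illustrative sketch of the permutation symmetry of a 2-layer MLP's
# weight space (not the paper's code). For
#     f(x) = W2 @ relu(W1 @ x + b1) + b2,
# permuting the hidden units -- rows of (W1, b1) and columns of W2 --
# yields different weights that compute the identical function.

rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 3, 5, 2
W1, b1 = rng.normal(size=(d_hidden, d_in)), rng.normal(size=d_hidden)
W2, b2 = rng.normal(size=(d_out, d_hidden)), rng.normal(size=d_out)

def mlp(x, W1, b1, W2, b2):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

perm = rng.permutation(d_hidden)  # a permutation of the hidden units
x = rng.normal(size=d_in)

# The same permutation applied consistently to both layers leaves the
# output unchanged, so both weight tuples represent the "same" network.
assert np.allclose(
    mlp(x, W1, b1, W2, b2),
    mlp(x, W1[perm], b1[perm], W2[:, perm], b2),
)
```

A permutation-equivariant weight-space model (e.g., a learned optimizer built on a UNF) should commute with this action: feeding it the permuted weights should produce correspondingly permuted outputs, so its behavior does not depend on the arbitrary ordering of hidden units.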
