

Poster in Workshop: Symmetry and Geometry in Neural Representations (NeurReps)

Barron's Theorem for Equivariant Networks

Hannah Lawrence

Keywords: [ Equivariance ] [ Invariance ] [ Approximation ] [ Universality ] [ Symmetry ] [ Barron ]


Abstract:

Incorporating known symmetries into a learning task provides a powerful inductive bias, reducing the sample complexity of learning equivariant functions in both theory and practice. Group-symmetric architectures for equivariant deep learning are now widespread, as are accompanying universality results that verify their representational power. However, these symmetric approximation theorems suffer from the same major drawback as their original non-symmetric counterparts: they may require impractically large networks. In this work, we demonstrate that for some commonly used groups, there exist smooth subclasses of functions, analogous to Barron classes, that can be efficiently approximated using invariant architectures. In particular, for permutation subgroups, there exist invariant approximating architectures whose sizes, while dependent on the precise orbit structure of the function, are in many cases just as small as the non-invariant architectures given by Barron's Theorem. For the rotation group, we define an invariant approximating architecture, built on a new invariant nonlinearity that may be of independent practical interest, that is likewise just as small as its non-invariant counterparts. Overall, we view this work as a first step towards capturing the smoothness of invariant functions in invariant universal approximation, thereby providing approximation results that are not only invariant but also efficient.
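
For reference, the classical result the abstract builds on can be stated as follows (Barron, 1993). If $f:\mathbb{R}^d\to\mathbb{R}$ has finite Barron constant

$$C_f = \int_{\mathbb{R}^d} \|\omega\|_2\, |\hat{f}(\omega)|\, d\omega,$$

then for any probability measure $\mu$ supported on the ball $B_r = \{x : \|x\|_2 \le r\}$ there is a two-layer sigmoidal network $f_n(x) = \sum_{k=1}^{n} a_k\,\sigma(w_k^\top x + b_k)$ with squared $L^2(\mu)$ error at most $(2 r C_f)^2 / n$. For a finite group $G$ acting by permutations, with $f$ and $\mu$ both $G$-invariant, the naive fix is group averaging: $\bar{f}_n(x) = \frac{1}{|G|}\sum_{g\in G} f_n(gx)$ is exactly $G$-invariant and, by Jensen's inequality, approximates $f$ at least as well as $f_n$. The catch is that $\bar{f}_n$ uses $n|G|$ hidden units, and it is precisely this blow-up that the orbit-dependent bounds described in the abstract aim to avoid.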

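The naive symmetrization above can be made concrete. The sketch below is a hypothetical illustration, not the paper's architecture: the function names and the choice of the cyclic group are ours. It averages a Barron-style two-layer network over a permutation subgroup, which is exactly invariant but pays the $|G|$-fold evaluation cost discussed above.

```python
# Illustrative sketch (not the paper's construction): a two-layer sigmoidal
# network of the kind in Barron's theorem, made invariant under a permutation
# subgroup G by averaging its outputs over the group orbit of the input.
import numpy as np

def two_layer(x, W, b, a):
    """f_n(x) = sum_k a_k * sigmoid(w_k . x + b_k)."""
    pre = W @ x + b                       # pre-activations, shape (n,)
    return a @ (1.0 / (1.0 + np.exp(-pre)))

def cyclic_group(d):
    """All d cyclic shifts of the coordinates, as index arrays."""
    return [np.roll(np.arange(d), s) for s in range(d)]

def symmetrized(x, W, b, a, group):
    """(1/|G|) * sum_{g in G} f_n(g x): exactly G-invariant, |G|x the cost."""
    return np.mean([two_layer(x[g], W, b, a) for g in group])

rng = np.random.default_rng(0)
d, n = 6, 32                              # input dimension, hidden width
W = rng.normal(size=(n, d))
b = rng.normal(size=n)
a = rng.normal(size=n)
G = cyclic_group(d)

x = rng.normal(size=d)
g = G[2]                                  # an arbitrary group element
print(symmetrized(x, W, b, a, G))         # the two outputs agree: the
print(symmetrized(x[g], W, b, a, G))      # averaged network is G-invariant
```

The two printed values agreeing confirms exact invariance. The point of the abstract is that this $|G|$-fold cost is avoidable: for smooth (Barron-class) invariant targets, network size can instead be governed by the orbit structure of the function, in many cases matching the non-invariant bound.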