Abstract:

Convolutional neural networks have proven very successful for a wide range of modelling tasks. Convolutional layers embed equivariance to discrete translations into the architectural structure of neural networks. Recent extensions generalize this notion to continuous Lie groups beyond translation, such as rotation, scale, or more complex symmetries. Another recent generalization of the convolution has allowed for relaxed equivariance constraints, which can be used to model data that does not fully respect symmetries while still leveraging the useful inductive biases that equivariance provides. Unlike the simple grids used for regular convolution over the translation group, convolution on Lie groups requires filters that are continuously parameterised. To parameterise sufficiently flexible continuous filters, small MLP hypernetworks are often used in practice. Although this works, it introduces many additional model parameters. To be more parameter-efficient, we propose an alternative approach that defines continuous filters on Lie groups with a small finite set of basis functions through pseudo-points. Regular convolutional layers appear as a special case, allowing for practical conversion between regular filters and our basis-function filter formulation at equal memory complexity. We demonstrate that basis-function filters can be used to create efficient equivariant and relaxed-equivariant versions of commonly used neural network architectures, outperforming baselines on the CIFAR-10 and CIFAR-100 vision classification tasks.
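To make the basis-function filter idea concrete, here is a minimal sketch, not the authors' implementation: it assumes Gaussian RBF basis functions anchored at learnable pseudo-points on the 2D translation group, and all names (BasisFunctionFilter, pseudo_points) and hyperparameters are hypothetical choices for illustration.

```python
# A sketch of a continuous filter parameterised by a small set of basis
# functions at learnable pseudo-points (assumed: Gaussian RBFs, 2D
# translation group). Not the paper's code; names are hypothetical.
import torch
import torch.nn as nn


class BasisFunctionFilter(nn.Module):
    """Continuous filter k(x) = sum_i w_i * exp(-||x - p_i||^2 / (2*sigma^2))."""

    def __init__(self, in_channels, out_channels, num_points=9, sigma=0.5):
        super().__init__()
        # Learnable pseudo-point locations in filter coordinates [-1, 1]^2.
        self.pseudo_points = nn.Parameter(torch.rand(num_points, 2) * 2 - 1)
        # One weight per (output channel, input channel, pseudo-point).
        self.weights = nn.Parameter(
            torch.randn(out_channels, in_channels, num_points) * 0.1
        )
        self.sigma = sigma

    def forward(self, coords):
        # coords: (N, 2) sample locations on the group (here: translations).
        # Evaluate each RBF at each location -> (N, num_points).
        sq_dist = ((coords[:, None, :] - self.pseudo_points[None, :, :]) ** 2).sum(-1)
        basis = torch.exp(-sq_dist / (2 * self.sigma ** 2))
        # Weighted sum over basis functions -> (out, in, N) filter values.
        return torch.einsum("oip,np->oin", self.weights, basis)


# Sampling the continuous filter on a regular 3x3 grid recovers an ordinary
# convolution kernel, mirroring the "regular conv as a special case" claim.
filter_net = BasisFunctionFilter(in_channels=3, out_channels=8)
grid = torch.stack(
    torch.meshgrid(torch.linspace(-1, 1, 3), torch.linspace(-1, 1, 3), indexing="ij"),
    dim=-1,
).reshape(-1, 2)                               # (9, 2) grid coordinates
kernel = filter_net(grid).reshape(8, 3, 3, 3)  # (out, in, kH, kW)
x = torch.randn(1, 3, 32, 32)
y = torch.nn.functional.conv2d(x, kernel, padding=1)
print(y.shape)  # torch.Size([1, 8, 32, 32])
```

In this sketch the parameter count scales with the number of pseudo-points rather than with the size of a hypernetwork, and evaluating the same module at arbitrary (e.g. group-transformed) coordinates is what would allow filters to be sampled on Lie groups beyond translation.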
