

Poster in Workshop: Symmetry and Geometry in Neural Representations

Almost Equivariance via Lie Algebra Convolutions

Daniel McNeela


Abstract: Recently, the $\textit{equivariance}$ of models with respect to a group action has become an important topic of research in machine learning. Analysis of the built-in equivariance of existing neural network architectures, as well as the study of methods for building model architectures that explicitly ``bake in'' equivariance, have become significant research areas in their own right. However, imbuing an architecture with a specific group equivariance imposes a strong prior on the types of data transformations that the model expects to see. While strictly equivariant models enforce symmetries, such as those due to rotations or translations, real-world data does not always follow such strict equivariances, be it due to noise in the data or to underlying physical laws that encode only approximate or partial symmetries. In such cases, the prior of strict equivariance can prove too strong and cause models to underperform on real-world data. Therefore, in this work we study a closely related topic, that of $\textit{almost equivariance}$. We give a practical method for encoding almost equivariance in models by appealing to the Lie algebra of a Lie group and defining $\textit{Lie algebra convolutions}$. We demonstrate that Lie algebra convolutions offer several benefits over Lie group convolutions, including being computationally tractable and well-defined for non-compact groups. Finally, we demonstrate the validity of our approach by benchmarking against datasets in fully equivariant and almost equivariant settings.
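
The abstract does not spell out the construction, but the general idea of parameterizing a convolution on the Lie algebra rather than on the group itself can be illustrated with a minimal sketch. The example below is an assumption-laden illustration, not the paper's construction: it fixes the group to $SO(2)$, parameterizes its one-dimensional algebra $\mathfrak{so}(2)$ by an angle through the generator $G = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$, and approximates the integral over the algebra with a finite sum of sampled algebra elements weighted by a learned kernel $\psi$. The module name, kernel architecture, and sampling scheme are hypothetical choices made only for this sketch.

```python
# Illustrative sketch only (assumptions noted above), not the paper's exact method.
# Group: SO(2) acting on 2D point coordinates; algebra so(2) parameterized by theta.

import torch
import torch.nn as nn


def exp_so2(theta: torch.Tensor) -> torch.Tensor:
    """Matrix exponential of theta * G for the so(2) generator G = [[0,-1],[1,0]]."""
    c, s = torch.cos(theta), torch.sin(theta)
    return torch.stack(
        [torch.stack([c, -s], dim=-1), torch.stack([s, c], dim=-1)], dim=-2
    )  # shape (..., 2, 2)


class LieAlgebraConv2D(nn.Module):
    """Sketch of a convolution parameterized on the Lie algebra so(2).

    Instead of integrating over the group with its Haar measure, we sample
    elements A = theta * G of the (flat, vector-space) algebra, act on the
    input through exp(A), and weight each transformed copy by a learned
    kernel psi(theta) defined on algebra coordinates.
    """

    def __init__(self, num_samples: int = 16, hidden: int = 32):
        super().__init__()
        self.num_samples = num_samples
        # psi: R -> R, a small MLP on algebra coordinates (hypothetical choice).
        self.psi = nn.Sequential(nn.Linear(1, hidden), nn.SiLU(), nn.Linear(hidden, 1))
        # f: a pointwise feature map applied to transformed coordinates.
        self.f = nn.Sequential(nn.Linear(2, hidden), nn.SiLU(), nn.Linear(hidden, hidden))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_points, 2) coordinates.
        theta = torch.linspace(-torch.pi, torch.pi, self.num_samples, device=x.device)
        R = exp_so2(theta)                              # (S, 2, 2)
        x_rot = torch.einsum("sij,bnj->bsni", R, x)     # (batch, S, N, 2)
        feats = self.f(x_rot)                           # (batch, S, N, hidden)
        weights = self.psi(theta.unsqueeze(-1))         # (S, 1)
        # Weighted average over algebra samples approximates the integral.
        return (weights.view(1, -1, 1, 1) * feats).mean(dim=1)  # (batch, N, hidden)


if __name__ == "__main__":
    layer = LieAlgebraConv2D()
    points = torch.randn(4, 10, 2)
    print(layer(points).shape)  # torch.Size([4, 10, 32])
```

Because the algebra is a flat vector space, the sampling measure here is just a uniform grid on a bounded window of $\mathfrak{so}(2)$, which is one way to see why an algebra-level formulation can remain well-defined even for non-compact groups that lack a normalized Haar measure.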
