

Oral in Workshop: Symmetry and Geometry in Neural Representations

Data Augmentations in Deep Weight Spaces

Aviv Shamsian · David Zhang · Aviv Navon · Yan Zhang · Miltiadis (Miltos) Kofinas · Idan Achituve · Riccardo Valperga · Gertjan Burghouts · Efstratios Gavves · Cees Snoek · Ethan Fetaya · Gal Chechik · Haggai Maron


Abstract:

Learning in weight spaces, where neural networks process the weights of other deep neural networks, has emerged as a promising research direction with applications in various fields, from analyzing and editing neural fields and implicit neural representations, to network pruning and quantization. Recent works designed architectures for effective learning in that space that take into account its unique permutation-equivariant structure. Unfortunately, so far these architectures suffer from severe overfitting and were shown to benefit from large datasets. This poses a significant challenge because generating data for this learning setup is laborious and time-consuming, since each data sample is a full set of network weights that has to be trained. In this paper, we address this difficulty by investigating data augmentations for weight spaces, a set of techniques that enable generating new data examples on the fly without having to train additional input weight space elements. We first review several recently proposed data augmentation schemes and divide them into categories. We then introduce a novel augmentation scheme based on the Mixup method. We evaluate the performance of these techniques on existing benchmarks as well as new benchmarks we generate, which can be valuable for future studies.
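To make the two ingredients of the abstract concrete, here is a minimal, illustrative sketch of (1) an augmentation that exploits the permutation symmetry of MLP weights, which yields a functionally identical network and hence a "free" new weight-space sample, and (2) a naive Mixup-style convex combination of two weight-space samples. All function names and the exact form of the augmentations are assumptions for illustration only; the paper's actual schemes (e.g., how networks are handled before mixing) may differ.

```python
import numpy as np


def permute_hidden_units(weights, biases, layer, perm=None, rng=None):
    """Apply a hidden-neuron permutation to one layer of an MLP.

    Permuting the rows of W[layer] / b[layer] and the matching columns of
    W[layer + 1] leaves the network's function unchanged, so the result is a
    new weight-space sample with the same label.
    """
    if rng is None:
        rng = np.random.default_rng()
    n_hidden = weights[layer].shape[0]
    if perm is None:
        perm = rng.permutation(n_hidden)

    new_w = [w.copy() for w in weights]
    new_b = [b.copy() for b in biases]
    new_w[layer] = new_w[layer][perm, :]          # permute output rows
    new_b[layer] = new_b[layer][perm]
    new_w[layer + 1] = new_w[layer + 1][:, perm]  # permute matching input columns
    return new_w, new_b


def naive_weight_mixup(weights_a, weights_b, alpha=0.2, rng=None):
    """Convex combination of two weight-space samples (naive Mixup baseline).

    lam ~ Beta(alpha, alpha); labels would be mixed with the same coefficient.
    Without accounting for the permutation symmetry between the two networks,
    this is only a rough baseline, not the paper's proposed scheme.
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    mixed = [lam * wa + (1 - lam) * wb for wa, wb in zip(weights_a, weights_b)]
    return mixed, lam


if __name__ == "__main__":
    # Toy 2-layer MLP: 3 inputs -> 4 hidden units -> 2 outputs.
    rng = np.random.default_rng(0)
    W = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
    b = [rng.normal(size=4), rng.normal(size=2)]

    def forward(W, b, x):
        h = np.maximum(W[0] @ x + b[0], 0.0)  # ReLU hidden layer
        return W[1] @ h + b[1]

    W_perm, b_perm = permute_hidden_units(W, b, layer=0, rng=rng)
    x = rng.normal(size=3)

    # The permuted network computes exactly the same function.
    assert np.allclose(forward(W, b, x), forward(W_perm, b_perm, x))
```

The assertion at the end illustrates why permutation-based augmentations are label-preserving: the augmented weights define the same input-output map as the original network.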
