Self-supervised learning without contrastive pairs has shown great success in recent years. However, why these networks do not collapse despite the absence of contrastive pairs was not well understood until recently. In this work we re-implemented the architectures and pre-training schemes of SimSiam, BYOL, DirectPred and DirectCopy. We investigated the eigenspace alignment hypothesis of DirectPred by plotting the eigenvalues and eigenspace alignments for both SimSiam and BYOL, with and without symmetric regularization. We also combined the DirectPred framework with SimCLRv2 to explore whether further improvements could be made. We achieved results comparable to those of the DirectPred paper with respect to accuracy and the behaviour of symmetry and eigenspace alignment.
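The eigenspace alignment mentioned above measures how closely the eigenvectors of the predictor's weight matrix track those of the correlation matrix of the online network's features. A minimal sketch of one way to compute such a per-eigenvector alignment score is below; the function name and the rank-wise pairing of eigenvectors are illustrative assumptions, not the exact metric from the reproduced paper.

```python
import numpy as np

def eigenspace_alignment(W_p, F):
    """Illustrative alignment score |u_i . v_i| between rank-matched
    eigenvectors of the predictor matrix W_p and the feature
    correlation matrix F (both assumed symmetric d x d)."""
    # Symmetrize to guard against small numerical asymmetries.
    W_s = (W_p + W_p.T) / 2
    F_s = (F + F.T) / 2
    _, U = np.linalg.eigh(W_s)  # eigenvectors of the predictor
    _, V = np.linalg.eigh(F_s)  # eigenvectors of the correlation matrix
    # eigh sorts eigenvalues ascending, so columns are paired by rank;
    # each score is the |cosine| between one pair of unit eigenvectors.
    return np.abs(np.sum(U * V, axis=0))
```

When the predictor is set directly from the eigendecomposition of F (as in DirectPred), every score is 1 by construction; during ordinary gradient training the scores drifting toward 1 is the alignment behaviour the reproduction tracks.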
Tobias Höppe (KTH Stockholm)
Agnieszka Miszkurka (KTH Royal Institute of Technology)
Dennis Bogatov Wilkman
2022 Spotlight: [Re] Understanding Self-Supervised Learning Dynamics without Contrastive Pairs »
Tobias Höppe · Agnieszka Miszkurka · Dennis Bogatov Wilkman