Self-supervised learning methods have witnessed a recent surge of interest after proving successful in multiple application fields. In this work, we leverage these techniques and propose 3D versions of five self-supervised methods, in the form of proxy tasks. Our methods facilitate neural network feature learning from unlabeled 3D images, aiming to reduce the cost of expert annotation. The developed algorithms are 3D Contrastive Predictive Coding, 3D Rotation prediction, 3D Jigsaw puzzles, Relative 3D patch location, and 3D Exemplar networks. Our experiments show that pretraining models with our 3D tasks yields more powerful semantic representations, and enables solving downstream tasks more accurately and efficiently, compared to training the models from scratch and to pretraining them on 2D slices. We demonstrate the effectiveness of our methods on three downstream tasks from the medical imaging domain: i) Brain Tumor Segmentation from 3D MRI, ii) Pancreas Tumor Segmentation from 3D CT, and iii) Diabetic Retinopathy Detection from 2D Fundus images. In each task, we assess the gains in data efficiency, performance, and speed of convergence. Interestingly, we also find gains when transferring the representations learned by our methods from a large unlabeled 3D corpus to a small downstream-specific dataset. We achieve results competitive with state-of-the-art solutions at a fraction of the computational expense. We publish our implementations of the developed algorithms (both 3D and 2D versions) as an open-source library, in an effort to allow other researchers to apply and extend our methods to their datasets.
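To give a flavor of how such a proxy task is set up, the following is a minimal, self-contained PyTorch sketch of 3D rotation prediction: random rotations are applied to unlabeled volumes, and the network is trained to classify which rotation was applied, with no expert labels required. It is a simplified illustration, not the paper's implementation or the published library's API: it restricts the task to four 90-degree rotations about a single axis, and the names (RotationPredictor, make_rotation_batch) are hypothetical.

```python
import torch
import torch.nn as nn

class RotationPredictor(nn.Module):
    """Toy 3D CNN that predicts which rotation was applied to a volume."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.head(h)

def make_rotation_batch(volumes):
    """Rotate each (1, D, H, W) volume by a random multiple of 90 degrees
    in the depth-height plane; that multiple is the self-supervised label.
    (Simplification: the full 3D task uses rotations about all axes.)"""
    labels = torch.randint(0, 4, (volumes.size(0),))
    rotated = torch.stack(
        [torch.rot90(v, k=int(k), dims=(1, 2)) for v, k in zip(volumes, labels)]
    )
    return rotated, labels

model = RotationPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

volumes = torch.randn(8, 1, 32, 32, 32)   # stand-in for unlabeled 3D scans
inputs, targets = make_rotation_batch(volumes)
loss = criterion(model(inputs), targets)
loss.backward()
optimizer.step()
```

After pretraining on a proxy task like this, the convolutional trunk (here, the features module) would be reused as the encoder for a downstream task such as segmentation, while the rotation head is discarded.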
Author Information
Aiham Taleb (Hasso-Plattner-Institute, Potsdam University)
Winfried Loetzsch (Hasso Plattner Institute)
I am about to complete my master's degree in computer science at the Hasso-Plattner-Institute in Potsdam, with a focus on machine learning. I have primarily worked with self-supervised techniques for representation learning on images and volumetric data, natural language processing, and deep reinforcement learning. Recently, I started researching deep learning-based image compression.
Noel Danz (HPI)
Julius Severin (Hasso Plattner Institute)
Thomas Gaertner (HPI)
Benjamin Bergner (HPI)
Christoph Lippert (Hasso Plattner Institute for Digital Engineering, Universität Potsdam)