Automated segmentation of fundus photographs would improve the quality, capacity, and cost-effectiveness of eye care screening programs. However, current segmentation methods are not robust to the diversity of images typical of clinical applications. To overcome this, we used contrastive self-supervised learning to pre-train the encoder of a U-Net on a large variety of unlabeled fundus images from the EyePACS dataset. We demonstrate for the first time that the pre-trained network learns to recognize blood vessels, the optic disc, the fovea, and various lesions without being provided any labels. Furthermore, when fine-tuned on a downstream blood vessel segmentation task, such pre-trained networks achieve state-of-the-art domain transfer performance. The pre-training also leads to improved few-shot performance and shorter training times on downstream tasks. Altogether, our results showcase the potential benefits of contrastive self-supervised pre-training for real-world clinical applications.
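
As an illustrative sketch only (the abstract does not specify the exact contrastive objective), self-supervised pre-training of this kind typically optimizes a SimCLR-style NT-Xent loss over two augmented views of each unlabeled image. A minimal NumPy version of that loss, with hypothetical function and variable names, might look like:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (SimCLR-style) contrastive loss.

    z1, z2: (N, D) arrays of projected embeddings of two augmented
    views of the same N images; row i of z1 and row i of z2 form a
    positive pair, all other rows act as negatives.
    """
    # L2-normalize so the dot product is cosine similarity
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    n = z1.shape[0]

    sim = z @ z.T / temperature                        # (2N, 2N)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs

    # positive for row i is row i+n (and vice versa)
    pos_idx = np.concatenate([np.arange(n, 2 * n), np.arange(n)])

    # numerically stable log-softmax over each row
    row_max = sim.max(axis=1, keepdims=True)
    log_prob = sim - row_max - np.log(
        np.exp(sim - row_max).sum(axis=1, keepdims=True))

    # cross-entropy pulling positive pairs together
    return -log_prob[np.arange(2 * n), pos_idx].mean()
```

During pre-training, the two views would come from random augmentations of the same fundus photograph, and the encoder (here, the U-Net encoder) plus a small projection head would produce `z1` and `z2`; after pre-training, the projection head is discarded and the encoder is fine-tuned on the labeled segmentation task.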