When convolutional layers apply no padding, central pixels of the input contribute to the convolution output in more ways than peripheral pixels. This discrepancy grows exponentially with the number of layers, producing an implicit foveation of the input. We show that the discrepancy can persist even when padding is applied. In particular, with the commonly used zero padding, foveation effects are significantly reduced but not eliminated. We explore how different aspects of convolution arithmetic affect the spread and magnitude of foveation, and discuss which alternative padding techniques can mitigate it. Finally, we compare our findings with foveation in human vision, concluding that both effects are likely of the same nature and have similar implications.
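To make the contribution discrepancy concrete, here is a minimal 1-D sketch (our own illustration, not taken from the paper; the function `backproject_counts` and its parameters are hypothetical) that counts how many times each input position is covered by a window of a stack of size-3 convolutions, with and without zero padding:

```python
import numpy as np

def backproject_counts(n_in, k, layers, pad):
    """Count how many convolution windows each input position of a 1-D
    signal falls into after `layers` stacked convolutions of size `k`
    with `pad` zeros on each side (stride 1). Illustrative sketch only."""
    # forward pass: track the signal length after each layer
    sizes = [n_in]
    for _ in range(layers):
        sizes.append(sizes[-1] + 2 * pad - k + 1)
    # backproject ones from the final output to count contributions
    counts = np.ones(sizes[-1])
    for s in reversed(sizes[:-1]):
        prev = np.zeros(s)
        for j in range(len(counts)):        # each output position j ...
            for t in range(k):              # ... reads inputs j-pad .. j-pad+k-1
                i = j - pad + t
                if 0 <= i < s:              # positions outside read the padding
                    prev[i] += counts[j]
        counts = prev
    return counts

# no padding: the edge pixel is used once, the center pixel 25 times
valid = backproject_counts(n_in=11, k=3, layers=3, pad=0)
# zero padding ('same'): the gap shrinks (13 vs. 27) but does not vanish
same = backproject_counts(n_in=11, k=3, layers=3, pad=1)
```

Under this counting, the center-to-edge ratio without padding grows roughly as $3^L$ with depth $L$, while zero padding compresses it to a small constant factor per layer, mirroring the reduced-but-not-eliminated foveation described above.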