The Intrinsic Dimension of Images and Its Impact on Learning
Chen Zhu · Micah Goldblum · Ahmed Abdelkader · Tom Goldstein · Phillip Pope

Fri Dec 11 12:00 PM -- 01:00 PM (PST)

It is widely believed that natural image data exhibits low-dimensional structure despite being embedded in a high-dimensional pixel space. This idea underlies a common intuition for the success of deep learning and has been exploited for enhanced regularization and adversarial robustness. In this work, we apply dimension estimation tools to popular datasets and investigate the role of low-dimensional structure in neural network learning. We find that common natural image datasets indeed have very low intrinsic dimension relative to the high number of pixels in the images. Additionally, we find that low-dimensional datasets are easier for neural networks to learn. We validate our findings with carefully designed experiments that vary the intrinsic dimension of both synthetic and real data and evaluate its impact on sample complexity.
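The abstract does not specify which dimension estimation tools are used. A standard choice for this kind of analysis is a maximum-likelihood, nearest-neighbor estimator in the style of Levina and Bickel, which infers the local dimension from the growth rate of k-nearest-neighbor distances. The sketch below is an illustrative implementation of that general technique on toy data, not the authors' code; all names and parameters are chosen for the example.

```python
import numpy as np

def mle_intrinsic_dim(X, k=10):
    """MLE-style intrinsic dimension estimate (Levina-Bickel form),
    averaged over all points using k nearest neighbors.

    Illustrative sketch only; brute-force distances, fine for small n.
    """
    # pairwise Euclidean distances between all rows of X
    diffs = X[:, None, :] - X[None, :, :]
    dist = np.sqrt(np.sum(diffs ** 2, axis=-1))
    # sort each row; column 0 is the point itself (distance 0), so skip it
    nn = np.sort(dist, axis=1)[:, 1:k + 1]
    # per-point estimate: inverse mean log-ratio of k-th to j-th NN distance
    logs = np.log(nn[:, -1:] / nn[:, :-1])
    m = (k - 1) / np.sum(logs, axis=1)
    return float(np.mean(m))

# toy check: 2-D Gaussian data linearly embedded in a 10-D ambient space,
# so the intrinsic dimension is 2 even though points live in R^10
rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 2))        # latent 2-D coordinates
A = rng.normal(size=(2, 10))         # random linear embedding
X = Z @ A
est = mle_intrinsic_dim(X, k=10)     # close to 2, far below ambient dim 10
```

The toy check mirrors the paper's premise: the estimator recovers a dimension near the manifold's true value rather than the ambient pixel count.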

Author Information

Chen Zhu (University of Maryland)
Micah Goldblum (University of Maryland)
Ahmed Abdelkader (University of Maryland, College Park)
Tom Goldstein (University of Maryland)
Phillip Pope (University of Maryland, College Park)
