The classification performance of deep neural networks has begun to asymptote at near-perfect levels on natural image benchmarks. Their ability to generalize beyond the training set and their robustness to adversarial attacks, however, have not kept pace. Humans, by contrast, generalize robustly and gracefully far outside their set of training samples. In this talk, I will discuss one strategy for translating these properties to machine-learning classifiers: training them to be uncertain in the same way as humans, rather than always right. When we integrate human uncertainty into training paradigms by using human guess distributions as labels, we find that the resulting classifiers generalize better and are more robust to adversarial attacks. Rather than expecting all image datasets to come with such labels, we intend our CIFAR10H dataset to serve as a gold standard against which algorithmic means of capturing the same information can be evaluated. To illustrate this, I present one automated method that does so: deep prototype models inspired by the cognitive science literature.
Author Information
Ruairidh Battleday (Princeton University)
In my research, I study generalization: how our inference about the novel and unknown is guided by our evolved and encountered past. This entails studying and formalizing generalization and analogical learning in humans, and testing these ideas by using them to create better machine-learning algorithms. More broadly, I am interested in furthering our understanding of cognition and intelligence by uniting insights from high-level theories and models of the brain, mind, and computation.