

Poster

Stochastic Kernel Regularisation Improves Generalisation in Deep Kernel Machines

Edward Milsom · Ben Anson · Laurence Aitchison

East Exhibit Hall A-C #3909
Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Recent work developed convolutional deep kernel machines, achieving 92.7% test accuracy on CIFAR-10 using a ResNet-inspired architecture, which is SOTA for kernel methods. However, this still lags behind neural networks, which easily achieve over 94% test accuracy with similar architectures. In this work we introduce several modifications to improve the convolutional deep kernel machine's generalisation, including stochastic kernel regularisation, which adds noise to the learned Gram matrices during training. The resulting model achieves 94.5% test accuracy on CIFAR-10. This finding has important theoretical and practical implications, as it demonstrates that the ability to perform well on complex tasks like image classification is not unique to neural networks. Instead, other approaches including deep kernel methods can achieve excellent performance on such tasks, as long as they have the capacity to learn representations from data.
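The abstract describes stochastic kernel regularisation only at a high level: noise is injected into the learned Gram matrices during training. The paper's exact noise model and implementation are not given here, so the sketch below is a hypothetical illustration of one way such a perturbation could be applied while keeping the matrix symmetric positive semi-definite (by perturbing a Cholesky factor rather than the matrix itself). The function name and parameters are assumptions for illustration only.

```python
import numpy as np

def noisy_gram(gram, noise_std=0.05, jitter=1e-6, rng=None):
    """Illustrative sketch (not the paper's exact method): return a noisy copy
    of a learned Gram matrix for use during training.

    Perturbing the Cholesky factor L (where G = L L^T) and re-forming the
    product keeps the result symmetric positive semi-definite by construction.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = gram.shape[0]
    # Small jitter guards the Cholesky factorisation against numerical issues.
    L = np.linalg.cholesky(gram + jitter * np.eye(n))
    # Additive Gaussian noise on the factor; (L + E)(L + E)^T is always PSD.
    E = noise_std * rng.standard_normal((n, n))
    L_noisy = L + E
    return L_noisy @ L_noisy.T

# Usage example on a small Gram matrix built from random features.
X = np.random.default_rng(0).standard_normal((8, 4))
G = X @ X.T
G_train = noisy_gram(G, noise_std=0.05)  # noisy matrix used for a training step
```

At test time one would presumably use the unperturbed Gram matrix, analogous to how dropout is disabled at inference; this, too, is an assumption rather than a detail stated in the abstract.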
