I Can’t Believe It’s Not Better: Understanding Deep Learning Through Empirical Falsification
Arno Blaas · Sahra Ghalebikesabi · Javier Antorán · Fan Feng · Melanie F. Pradier · Ian Mason · David Rohde

Sat Dec 03 06:15 AM -- 03:00 PM (PST) @ Ballroom C
Event URL: https://sites.google.com/view/icbinb-2022/

Deep learning has flourished in the last decade. Recent breakthroughs have shown stunning results, and yet researchers still cannot fully explain why neural networks generalise so well, or why some architectures and optimizers work better than others. This lack of understanding of existing deep learning systems led NeurIPS 2017 Test of Time Award winners Rahimi and Recht to compare machine learning to alchemy and to call for the return of the 'rigour police'.

Despite excellent theoretical work in the field, deep neural networks are so complex that they may not be fully comprehensible through theory alone. Unfortunately, the experimental alternative - rigorous empirical work that neither proves a theorem nor proposes a new method - is currently under-valued in the machine learning community.

To change this, the workshop aims to promote the method of empirical falsification.

We solicit contributions that explicitly formulate a hypothesis related to deep learning or its applications (based on first principles or prior work), and then empirically falsify it through experiments. We further encourage submissions to go a layer deeper and investigate the causes of an initial idea not working as expected. This workshop will showcase how negative results offer important learning opportunities for deep learning researchers, possibly far greater than the incremental improvements found in conventional machine learning papers!

Why empirical falsification? In the words of Karl Popper, "It is easy to obtain confirmations, or verifications, for nearly every theory—if we look for confirmations. Confirmations should count only if they are the result of risky predictions."
We believe that, much as in physics, which seeks to understand nature, the complexity of deep neural networks makes any inductively built understanding of them likely to be brittle.

The most reliable method with which physicists can probe nature is by experimentally validating (or not) the falsifiable predictions made by their existing theories. We posit the same could be the case for deep learning and believe that the task of understanding deep neural networks would benefit from adopting the approach of empirical falsification.

Author Information

Arno Blaas (Apple)
Sahra Ghalebikesabi (University of Oxford)
Sahra Ghalebikesabi is a fourth-year PhD student at the University of Oxford, supervised by Chris Holmes. During her PhD, she interned at DeepMind London and Microsoft Research Cambridge. She is also a Microsoft Research PhD Fellow. Her research focuses on generative modelling for robustness, differential privacy, and interpretability.

Javier Antorán (University of Cambridge)
Fan Feng (City University of Hong Kong)
Melanie F. Pradier (Microsoft Research)
Ian Mason (Massachusetts Institute of Technology)
David Rohde (Criteo)