

Workshop

I Can’t Believe It’s Not Better: Understanding Deep Learning Through Empirical Falsification

Arno Blaas · Sahra Ghalebikesabi · Javier Antorán · Fan Feng · Melanie F. Pradier · Ian Mason · David Rohde

La Nouvelle Orleans Ballroom C (level 2)

Deep learning has flourished in the last decade. Recent breakthroughs have produced stunning results, and yet researchers still cannot fully explain why neural networks generalize so well or why some architectures and optimizers work better than others. This lack of understanding of existing deep learning systems led NeurIPS 2017 Test of Time Award winners Rahimi & Recht to compare machine learning to alchemy and to call for the return of the 'rigour police'.

Despite excellent theoretical work in the field, deep neural networks may be too complex to be fully understood with theory alone. Unfortunately, the experimental alternative - rigorous empirical work that neither proves a theorem nor proposes a new method - is currently undervalued in the machine learning community.

To change this, the workshop aims to promote the method of empirical falsification.

We solicit contributions that explicitly formulate a hypothesis about deep learning or its applications (based on first principles or prior work) and then empirically falsify it through experiments. We further encourage submissions to go a layer deeper and investigate why an initial idea did not work as expected. This workshop will showcase how negative results offer important learning opportunities for deep learning researchers, possibly far greater than the incremental improvements reported in conventional machine learning papers!

Why empirical falsification? In the words of Karl Popper, "It is easy to obtain confirmations, or verifications, for nearly every theory—if we look for confirmations. Confirmations should count only if they are the result of risky predictions."
We believe that, much as in physics, which seeks to understand nature, the complexity of deep neural networks means that any understanding of them built purely by induction is likely to be brittle.

The most reliable way physicists can probe nature is to experimentally test (and potentially refute) the falsifiable predictions made by their existing theories. We posit that the same holds for deep learning and believe that the task of understanding deep neural networks would benefit from adopting the approach of empirical falsification.


Schedule