
Generative causal explanations of black-box classifiers
Matthew O'Shaughnessy · Gregory Canal · Marissa Connor · Christopher Rozell · Mark Davenport

Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3 #808

We develop a method for generating causal post-hoc explanations of black-box classifiers based on a learned low-dimensional representation of the data. The explanation is causal in the sense that changing learned latent factors produces a change in the classifier output statistics. To construct these explanations, we design a learning framework that leverages a generative model and information-theoretic measures of causal influence. Our objective function encourages both the generative model to faithfully represent the data distribution and the latent factors to have a large causal influence on the classifier output. Our method learns both global and local explanations, is compatible with any classifier that admits class probabilities and a gradient, and does not require labeled attributes or knowledge of causal structure. Using carefully controlled test cases, we provide intuition that illuminates the function of our causal objective. We then demonstrate the practical utility of our method on image recognition tasks.
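The abstract's notion of causal influence — changing a latent factor should change the classifier's output statistics — can be illustrated with a minimal sketch. Everything below is illustrative, not the paper's implementation: a toy linear generator and logistic classifier stand in for the learned generative model and black-box classifier, and a Monte Carlo plug-in mutual-information estimate between a (binned) latent factor and the sampled classifier output stands in for the paper's information-theoretic influence measure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (names illustrative, not from the paper):
# a linear generator maps 2-D latents to data; a logistic classifier
# reads only the first data coordinate.
W_g = np.array([[1.0, 0.0],
                [0.0, 0.1]])   # latent factor 0 dominates the data
w_f = np.array([2.0, 0.0])     # classifier depends only on x[0]

def generator(z):
    return z @ W_g.T

def classifier_probs(x):
    p1 = 1.0 / (1.0 + np.exp(-(x @ w_f)))
    return np.stack([1.0 - p1, p1], axis=-1)

def causal_influence(factor, n=20000):
    """Plug-in Monte Carlo estimate of I(z_factor; Y), where Y is the
    classifier's sampled label on generated data x = generator(z)."""
    z = rng.standard_normal((n, 2))
    probs = classifier_probs(generator(z))
    y = (rng.random(n) < probs[:, 1]).astype(int)
    # Discretize the latent factor into 10 quantile bins.
    edges = np.quantile(z[:, factor], np.linspace(0, 1, 11))
    b = np.clip(np.digitize(z[:, factor], edges[1:-1]), 0, 9)
    # Empirical joint distribution over (bin, label).
    joint = np.zeros((10, 2))
    np.add.at(joint, (b, y), 1.0)
    joint /= n
    pz = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (pz @ py)[nz])).sum())

# Factor 0 drives the classifier, so its influence is large;
# factor 1 never reaches the classifier, so its influence is near zero.
print(causal_influence(0))
print(causal_influence(1))
```

Under this setup, changing factor 0 shifts the classifier's output statistics (large mutual information), while factor 1 has essentially no causal influence — mirroring how the paper's objective separates explanatory latent factors from those that only model the data.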

Author Information

Matthew O'Shaughnessy (Georgia Institute of Technology)

I am a fifth-year PhD student at Georgia Tech planning to graduate in Fall 2021. My technical research interests are broadly in machine learning, causality, and low-dimensional structure. I'm also interested in public policy, AI policy, and science communication.

Gregory Canal (Georgia Institute of Technology)
Marissa Connor (Georgia Institute of Technology)
Christopher Rozell (Georgia Institute of Technology)
Mark Davenport (Georgia Institute of Technology)
