Poster
Human-Guided Complexity-Controlled Abstractions
Andi Peng · Mycal Tucker · Eoin Kenny · Noga Zaslavsky · Pulkit Agrawal · Julie A Shah

Thu Dec 14 03:00 PM -- 05:00 PM (PST) @ Great Hall & Hall B1+B2 #901

Neural networks often learn task-specific latent representations that fail to generalize to novel settings or tasks. Conversely, humans learn discrete representations (i.e., concepts or words) at a variety of abstraction levels (e.g., "bird" vs. "sparrow") and use the appropriate abstraction depending on the task. Inspired by this, we train neural models to generate a spectrum of discrete representations and control the complexity of those representations (roughly, how many bits are allocated to encoding inputs) by tuning the entropy of the distribution over representations. In finetuning experiments using only a small number of labeled examples for a new task, we show that (1) tuning the representation to a task-appropriate complexity level supports the greatest finetuning performance, and (2) in a human-participant study, users were able to identify the appropriate complexity level for a downstream task via visualizations of discrete representations. Our results indicate a promising direction for rapid model finetuning by leveraging human insight.
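As a concrete illustration of the entropy-tuning idea described in the abstract, below is a minimal sketch, not the authors' implementation: the module and parameter names (DiscreteBottleneck, num_prototypes, entropy_weight) are illustrative assumptions. It encodes inputs as a distribution over a set of discrete prototypes and exposes the entropy of that distribution so a regularizer can tune representation complexity.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DiscreteBottleneck(nn.Module):
        # Hypothetical sketch: encode inputs as a distribution over K discrete
        # prototypes; the entropy of that distribution is the complexity knob.
        def __init__(self, input_dim: int, num_prototypes: int, proto_dim: int):
            super().__init__()
            self.to_logits = nn.Linear(input_dim, num_prototypes)
            self.prototypes = nn.Parameter(torch.randn(num_prototypes, proto_dim))

        def forward(self, x):
            probs = F.softmax(self.to_logits(x), dim=-1)  # (batch, K) assignment
            z = probs @ self.prototypes                   # soft discrete code
            # Mean Shannon entropy (in nats) of the assignment distribution
            entropy = -(probs * (probs + 1e-9).log()).sum(dim=-1).mean()
            return z, entropy

    # Training objective sketch: loss = task_loss + entropy_weight * entropy.
    # A larger entropy_weight drives entropy down, concentrating mass on fewer
    # prototypes (coarser, lower-bit abstractions); a smaller or negative weight
    # permits higher-entropy, finer-grained representations.

Under these assumptions, sweeping entropy_weight would yield the kind of spectrum of discrete representations from which a person could select a task-appropriate complexity level before finetuning.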

Author Information

Andi Peng (MIT)
Mycal Tucker (Massachusetts Institute of Technology)
Eoin Kenny (MIT)

I am an explainable AI (XAI) researcher. Previously, I did my Ph.D. at University College Dublin, Ireland, where I worked on post-hoc explanation-by-example with my supervisor Mark Keane. Currently I am researching XAI at MIT alongside Julie Shah, with a focus on contrastive explanation and interpretable deep reinforcement learning. I envision AI systems that can be successfully deployed with useful, human-friendly explanations, so that everyone (not just ML experts) can clearly see what they are doing. To this end, I use example-based XAI, because it is similar to how humans are thought to reason and has substantial support from user testing showing that it is useful and understandable to people. My strongest contributions to the field have been (1) introducing Semi-Factual explanation, and (2) designing the first interpretable Deep RL system.

Noga Zaslavsky (UCI)
Pulkit Agrawal (MIT)
Julie A Shah (MIT)
