Demonstrations must show novel technology and must run online during the conference. Unlike poster presentations or slide shows, interaction with the audience is a critical element. Therefore, demonstrators' creativity in proposing new ways in which interaction and engagement can fully leverage this year's virtual conference format will be particularly relevant for selection. This session has the following demonstrations:
Wed 8:30 a.m. - 8:35 a.m. | Intro (Talk)
Douwe Kiela
Wed 8:35 a.m. - 8:50 a.m. | Automated Evaluation of GNN Explanations with Neuro Symbolic Reasoning (Live Demo)
Automatically evaluating explanations of GNN model predictions can significantly increase the adoption of GNN models and explainability solutions in real-world applications. In this demo, we present a method that uses GNN predictions and symbolic reasoners to automatically evaluate the explanations produced by GNN explainability techniques.
Vanya BK · Muhammed Ameen · Balaji Ganesan · Devbrat Sharma · Arvind Agarwal
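The abstract above describes checking GNN explanations against symbolic reasoning. The following is a minimal sketch, in Python, of one way that idea can look: the explanation is treated as a small typed subgraph and tested against hand-written rules whose heads correspond to predicted labels. The rule format, graph encoding, and example data are assumptions made for illustration; they are not the authors' actual method or API.

```python
# Hypothetical sketch: score a GNN explanation by checking whether the
# explanation subgraph satisfies a symbolic rule for the predicted label.
# Rules, node types, and edges below are illustrative only.

def explanation_supports_prediction(explanation_edges, node_types, predicted_label, rules):
    """Return True if some rule with head == predicted_label is satisfied
    by the typed edges present in the explanation subgraph."""
    facts = {(node_types[u], rel, node_types[v]) for (u, rel, v) in explanation_edges}
    for head, body in rules:
        if head == predicted_label and body.issubset(facts):
            return True
    return False

# Toy knowledge-graph style rules (entirely made up):
rules = [
    ("works_in_finance", {("Person", "employed_by", "Bank")}),
    ("is_manager",       {("Person", "manages", "Team"), ("Person", "employed_by", "Bank")}),
]

node_types = {"n1": "Person", "n2": "Bank", "n3": "Team"}
explanation_edges = [("n1", "employed_by", "n2"), ("n1", "manages", "n3")]

for label in ["works_in_finance", "is_manager"]:
    ok = explanation_supports_prediction(explanation_edges, node_types, label, rules)
    print(f"Explanation consistent with prediction '{label}': {ok}")
```

A full system would replace the set-membership check with a proper symbolic reasoner and use the GNN's actual predictions, but the evaluation signal is the same: does the explanation logically entail what the model predicted?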
Wed 8:50 a.m. - 9:05 a.m. | AIMEE: Interactive model maintenance with rule-based surrogates (Live Demo)
In real-world applications, such as loan approvals or claims management, machine learning (ML) models need to be updated or retrained to adhere to new rules and regulations. But how can a new model be built and new decision boundaries be formed without new training data? We present the AI Model Explorer and Editor tool (AIMEE) for model exploration and model editing using human-understandable rules. It addresses the problem of changing decision boundaries by leveraging user-specified feedback rules that are used to pre-process the training data so that a retrained model reflects the user's changes. The pre-processing step uses synthetic oversampling and relabeling and assumes white-box access to the model. AIMEE provides interactive methods to edit rule sets, visualize changes to decision boundaries, and generate interpretable comparisons of model changes so that users see their feedback reflected in the updated model. The demo shows an end-to-end solution that supports the full update lifecycle of an ML model.
Owen Cornec · Rahul Nair · Oznur Alkan · Dennis Wei · Elizabeth Daly
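To make the "relabel according to a user rule, then retrain" step concrete, here is a minimal Python sketch on a synthetic loan-approval table. The rule format, feature names, and use of scikit-learn are assumptions for illustration; AIMEE's actual pipeline, including its synthetic oversampling step, is not reproduced here.

```python
# Minimal sketch of rule-based relabel-and-retrain, in the spirit of the
# pre-processing step described above. All data and the feedback rule are
# synthetic; this is not AIMEE's implementation.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
train = pd.DataFrame({
    "income": rng.integers(20_000, 120_000, size=500).astype(float),
    "debt_ratio": rng.uniform(0.0, 1.0, size=500),
})
train["approved"] = (train["income"] > 60_000).astype(int)  # original labels

# Hypothetical user feedback rule: "if debt_ratio > 0.8, the loan must be denied".
rule_mask = train["debt_ratio"] > 0.8
train.loc[rule_mask, "approved"] = 0  # relabel the rows covered by the rule

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(train[["income", "debt_ratio"]], train["approved"])

# The retrained decision boundary now reflects the user's rule.
probe = pd.DataFrame({"income": [100_000.0], "debt_ratio": [0.9]})
print("P(approve | high income, very high debt ratio):", model.predict_proba(probe)[0, 1])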
Wed 9:05 a.m. - 9:20 a.m. | AME: Interpretable Almost Exact Matching for Causal Inference (Live Demo)
AME (Almost Matching Exactly) is an interactive web-based application that allows users to perform matching for observational causal inference on large datasets. The AME application is powered by the Fast Large-Scale Almost Matching Exactly (FLAME) (JMLR'21) and Dynamic Almost Matching Exactly (DAME) (AISTATS'19) algorithms, which match treatment and control units in a way that is interpretable, because the matches are made directly on covariates; high-quality, because machine learning is used to determine which covariates are important to match on; and scalable, because they use techniques from data management. Our demonstration shows the usefulness of these algorithms and allows easy interactive exploration of treatment effect estimates and the corresponding matched groups of units, with a suite of visualizations providing detailed insights to users.
Haoning Jiang · Thomas Howell · Neha Gupta · Vittorio Orlandi · Sudeepa Roy · Marco Morucci · Harsh Parikh · Alexander Volfovsky · Cynthia Rudin
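The core intuition, matching units that agree exactly on discrete covariates and estimating effects within each matched group, can be illustrated in a few lines of Python. This toy sketch is not the FLAME or DAME algorithm, which additionally learn which covariates can be dropped so that more units find high-quality matches; the data and effect size are synthetic.

```python
# Toy illustration of the matching idea behind FLAME/DAME: group units that
# agree exactly on discrete covariates, then estimate treatment effects
# within each matched group. Not the actual FLAME/DAME algorithms.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 1_000
df = pd.DataFrame({
    "age_group": rng.integers(0, 3, n),   # discrete covariates
    "smoker": rng.integers(0, 2, n),
    "treated": rng.integers(0, 2, n),
})
# Synthetic outcome with a true treatment effect of +2.0
df["outcome"] = (
    1.5 * df["age_group"] - 1.0 * df["smoker"] + 2.0 * df["treated"]
    + rng.normal(0, 1, n)
)

covariates = ["age_group", "smoker"]
records = []
for key, g in df.groupby(covariates):
    treated, control = g[g["treated"] == 1], g[g["treated"] == 0]
    if len(treated) and len(control):     # keep only groups with both arms
        records.append({
            "group": key,
            "size": len(g),
            "cate": treated["outcome"].mean() - control["outcome"].mean(),
        })

matched = pd.DataFrame(records)
ate = np.average(matched["cate"], weights=matched["size"])
print(matched)
print("Weighted estimate of the average treatment effect:", round(ate, 2))
```

Because the matches are literal covariate values, each matched group can be shown to the user as-is, which is the interpretability property the demo emphasizes.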
Wed 9:20 a.m. - 9:35 a.m. | Exploring Conceptual Soundness with TruLens (Live Demo)
As machine learning has become increasingly ubiquitous, there has been a growing need to assess the trustworthiness of learned models. One important aspect of model trust is conceptual soundness, i.e., the extent to which a model uses features that are appropriate for its intended task. We present TruLens, a new cross-platform framework for explaining deep network behavior. In our demonstration, we provide an interactive application built on TruLens that we use to explore the conceptual soundness of various pre-trained models. Throughout the presentation, we take the perspective that robustness to small-norm adversarial examples is a necessary condition for conceptual soundness; we demonstrate this by comparing explanations on models trained with and without a robust objective. Our demonstration focuses on our end-to-end application, which will be made accessible for the audience to interact with, but we also provide details on its open-source components, including the TruLens library and the code used to train robust networks.
Anupam Datta · Matt Fredrikson · Klas Leino · Kaiji Lu · Shayak Sen · Ricardo C Shih · Zifan Wang
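For readers unfamiliar with attribution-based explanations of the kind the demo compares across standard and robustly trained models, here is a minimal gradient-saliency sketch in Python (PyTorch). It deliberately does not use the TruLens API; the model, input, and saliency recipe are placeholders chosen only to show what an input-attribution heat map is.

```python
# Minimal gradient-saliency sketch: attribute a classifier's predicted-class
# score to input pixels. Model and input are random stand-ins, not TruLens code.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(            # stand-in for a pre-trained image classifier
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
model.eval()

x = torch.rand(1, 3, 32, 32, requires_grad=True)   # stand-in input image
logits = model(x)
predicted = int(logits.argmax(dim=1))

# Gradient of the predicted class score with respect to the input pixels.
logits[0, predicted].backward()
saliency = x.grad.abs().amax(dim=1)   # max over color channels -> (1, 32, 32) heat map

print("Predicted class:", predicted)
print("Saliency map shape:", tuple(saliency.shape))
```

Comparing such heat maps for a standard model versus one trained with a robust objective is one concrete way to see whether the model's attributions align with task-appropriate features.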
Wed 9:35 a.m. - 9:50 a.m. | An Interactive Tool for Computation with Assemblies of Neurons (Live Demo)
While artificial neural networks have reached once-unimaginable levels of performance, it is also broadly recognized that they lag far behind biological brains in important respects (besides biological plausibility), such as interpretability, adaptability to new tasks, and power and data usage. Over the past two decades, the study of the animal brain has also progressed tremendously through powerful recording techniques and interpretation methodology, in recent years assisted greatly by machine learning. However, this progress has not brought us closer to answering the field's own overarching interpretability question: how exactly does the activity of neurons and synapses result in high-level cognitive functions, especially in the human brain? The Assembly Calculus (AC) is a novel framework intended to bridge the gap between the level of neurons and synapses and that of cognition. AC is a computational system comprising a basic data item called an assembly, a stable set of neurons explained below; a set of operations that create and manipulate assemblies; and an execution model squarely based on basic tenets of neuroscience. Importantly, it allows the creation of biologically plausible, flexible, and interpretable programs, enabling one to develop tangible hypotheses about how specific brain functions may work. To facilitate such experimentation, we present a tool that allows real-time simulation, modification, and visualization of this computational system, including several prepared examples. Our tool is a web application that greatly aids in the creation and analysis of algorithms within the Assembly Calculus by letting the user visualize neurons and their connections and by providing a simple interface to dynamically modify and run code on assemblies. The interface can be accessed here: http://brain.cc.gatech.edu
Seung Je Jung · Christos Papadimitriou · Santosh Vempala
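A minimal NumPy sketch of the Assembly Calculus "projection" primitive may help fix ideas: a stimulus fires repeatedly into a randomly connected area, only the k most active neurons fire each round (the winner-take-all cap), and Hebbian plasticity strengthens the synapses that were used, until the winning set stabilizes into an assembly. The parameter values and simulation details below are illustrative assumptions, not those used by the demo at http://brain.cc.gatech.edu.

```python
# Minimal sketch of Assembly Calculus projection: random connectivity,
# top-k winner-take-all firing, and Hebbian weight updates until the set of
# winners stabilizes into an assembly. Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
n, k, p, beta, steps = 1_000, 50, 0.05, 0.10, 20

stim_w = (rng.random((k, n)) < p).astype(float)   # stimulus -> area synapses
rec_w = (rng.random((n, n)) < p).astype(float)    # recurrent synapses within the area
np.fill_diagonal(rec_w, 0.0)

prev_winners = np.array([], dtype=int)
overlap = 0.0
for t in range(steps):
    drive = stim_w.sum(axis=0)                            # input from the firing stimulus
    if prev_winners.size:
        drive = drive + rec_w[prev_winners].sum(axis=0)   # input from last round's winners
    winners = np.argsort(drive)[-k:]                      # cap: only the top-k neurons fire

    # Hebbian plasticity: strengthen synapses into the current winners.
    stim_w[:, winners] *= (1.0 + beta)
    if prev_winners.size:
        rec_w[np.ix_(prev_winners, winners)] *= (1.0 + beta)

    overlap = len(np.intersect1d(winners, prev_winners)) / k
    prev_winners = winners

print(f"Overlap with previous round after {steps} steps: {overlap:.2f} (near 1.0 means a stable assembly)")
```

The demo's web interface exposes exactly this kind of loop interactively, letting users watch the winner set converge and compose projection with the other AC operations.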