Poster
Mind the Gap: A Generative Approach to Interpretable Feature Selection and Extraction
Been Kim · Julie A Shah · Finale Doshi-Velez
We present the Mind the Gap Model (MGM), an approach for interpretable feature extraction and selection. By placing interpretability criteria directly into the model, we allow the model both to optimize parameters related to interpretability and to directly report a global set of distinguishable dimensions that assist with further data exploration and hypothesis generation. MGM extracts distinguishing features on real-world datasets of animal features, recipe ingredients, and disease co-occurrence. It also maintains or improves performance when compared to related approaches. We perform a user study with domain experts to show the MGM's ability to help with dataset exploration.
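To illustrate the general idea of "distinguishable dimensions" (this is not the paper's actual generative model, just a minimal sketch): one can rank binary features by how sharply their prevalence differs between groups, so that features with the largest "gap" in per-group prevalence are reported as the most distinguishing. The data layout and clustering below are assumptions for the example:

```python
import numpy as np

def gap_scores(X, labels):
    """Rank binary features by the spread of their per-group means.

    X: (n_samples, n_features) binary matrix (e.g. recipe x ingredient).
    labels: group assignment per sample.
    Returns feature indices sorted by descending gap
    (max minus min per-group prevalence).
    """
    groups = np.unique(labels)
    # Per-group mean of each feature: shape (n_groups, n_features).
    means = np.array([X[labels == g].mean(axis=0) for g in groups])
    # A feature's gap is how far apart its group prevalences lie.
    gaps = means.max(axis=0) - means.min(axis=0)
    return np.argsort(-gaps)

# Toy example: feature 0 perfectly separates the two groups,
# feature 1 is equally common in both.
X = np.array([[1, 1], [1, 0], [0, 1], [0, 0]])
labels = np.array([0, 0, 1, 1])
print(gap_scores(X, labels))  # feature 0 ranked first: [0 1]
```

The actual MGM optimizes such criteria jointly with a generative model rather than scoring features post hoc; this sketch only conveys the intuition behind reporting a global set of gap-maximizing dimensions.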
Author Information
Been Kim (Allen Institute for Artificial Intelligence)
Julie A Shah (MIT)
Finale Doshi-Velez (Harvard)
More from the Same Authors
-
2021 Spotlight: Learning MDPs from Features: Predict-Then-Optimize for Sequential Decision Making by Reinforcement Learning »
Kai Wang · Sanket Shah · Haipeng Chen · Andrew Perrault · Finale Doshi-Velez · Milind Tambe -
2021 : Identification of Subgroups With Similar Benefits in Off-Policy Policy Evaluation »
Ramtin Keramati · Omer Gottesman · Leo Celi · Finale Doshi-Velez · Emma Brunskill -
2021 : Advanced Methods for Connectome-Based Predictive Modeling of Human Intelligence: A Novel Approach Based on Individual Differences in Cortical Topography »
Evan Anderson · Anuj Nayak · Pablo Robles-Granda · Lav Varshney · Been Kim · Aron K Barbey -
2022 : An Empirical Analysis of the Advantages of Finite vs. Infinite Width Bayesian Neural Networks »
Jiayu Yao · Yaniv Yacoby · Beau Coker · Weiwei Pan · Finale Doshi-Velez -
2022 : Trading off Utility, Informativeness, and Complexity in Emergent Communication »
Mycal Tucker · Julie A Shah · Roger Levy · Noga Zaslavsky -
2022 : Feature-Level Synthesis of Human and ML Insights »
Isaac Lage · Sonali Parbhoo · Finale Doshi-Velez -
2022 : What Makes a Good Explanation?: A Unified View of Properties of Interpretable ML »
Varshini Subhash · Zixi Chen · Marton Havasi · Weiwei Pan · Finale Doshi-Velez -
2022 : What Makes a Good Explanation?: A Unified View of Properties of Interpretable ML »
Zixi Chen · Varshini Subhash · Marton Havasi · Weiwei Pan · Finale Doshi-Velez -
2022 : Concept-based Understanding of Emergent Multi-Agent Behavior »
Niko Grupen · Shayegan Omidshafiei · Natasha Jaques · Been Kim -
2022 : (When) Are Contrastive Explanations of Reinforcement Learning Helpful? »
Sanjana Narayanan · Isaac Lage · Finale Doshi-Velez -
2022 : Leveraging Human Features at Test-Time »
Isaac Lage · Sonali Parbhoo · Finale Doshi-Velez -
2022 : Fast Adaptation via Human Diagnosis of Task Distribution Shift »
Andi Peng · Mark Ho · Aviv Netanyahu · Julie A Shah · Pulkit Agrawal -
2022 : Temporal Logic Imitation: Learning Plan-Satisficing Motion Policies from Demonstrations »
Felix Yanwei Wang · Nadia Figueroa · Shen Li · Ankit Shah · Julie A Shah -
2022 : An Empirical Analysis of the Advantages of Finite vs. Infinite Width Bayesian Neural Networks »
Jiayu Yao · Yaniv Yacoby · Beau Coker · Weiwei Pan · Finale Doshi-Velez -
2022 : Aligning Robot Representations with Humans »
Andreea Bobu · Andi Peng · Pulkit Agrawal · Julie A Shah · Anca Dragan -
2022 : Generalization and Translatability in Emergent Communication via Informational Constraints »
Mycal Tucker · Roger Levy · Julie A Shah · Noga Zaslavsky -
2023 Poster: State2Explanation: Concept-Based Explanations to Benefit Agent Learning and User Understanding »
Devleena Das · Sonia Chernova · Been Kim -
2023 Poster: Human-Guided Complexity-Controlled Abstractions »
Andi Peng · Mycal Tucker · Eoin Kenny · Noga Zaslavsky · Pulkit Agrawal · Julie A Shah -
2023 Poster: Does Localization Inform Editing? Surprising Differences in Causality-Based Localization vs. Knowledge Editing in Language Models »
Peter Hase · Mohit Bansal · Been Kim · Asma Ghandeharioun -
2023 Poster: Gaussian Process Probes (GPP) for Uncertainty-Aware Probing »
Alexander Ku · Zi Wang · Jason Baldridge · Tom Griffiths · Been Kim -
2022 : Panel: Explainability/Predictability Robotics (Q&A 4) »
Katherine Driggs-Campbell · Been Kim · Leila Takayama -
2022 : Panel Discussion »
Kamalika Chaudhuri · Been Kim · Dorsa Sadigh · Huan Zhang · Linyi Li -
2022 : Invited Talk: Been Kim »
Been Kim -
2022 : Generalization and Translatability in Emergent Communication via Informational Constraints »
Mycal Tucker · Roger Levy · Julie A Shah · Noga Zaslavsky -
2022 : What Makes a Good Explanation?: A Unified View of Properties of Interpretable ML »
Varshini Subhash · Zixi Chen · Marton Havasi · Weiwei Pan · Finale Doshi-Velez -
2022 Poster: Addressing Leakage in Concept Bottleneck Models »
Marton Havasi · Sonali Parbhoo · Finale Doshi-Velez -
2022 Poster: Leveraging Factored Action Spaces for Efficient Offline Reinforcement Learning in Healthcare »
Shengpu Tang · Maggie Makar · Michael Sjoding · Finale Doshi-Velez · Jenna Wiens -
2022 Poster: Beyond Rewards: a Hierarchical Perspective on Offline Multiagent Behavioral Analysis »
Shayegan Omidshafiei · Andrei Kapishnikov · Yannick Assogba · Lucas Dixon · Been Kim -
2021 : Retrospective Panel »
Sergey Levine · Nando de Freitas · Emma Brunskill · Finale Doshi-Velez · Nan Jiang · Rishabh Agarwal -
2021 : [O5] Do Feature Attribution Methods Correctly Attribute Features? »
Yilun Zhou · Serena Booth · Marco Tulio Ribeiro · Julie A Shah -
2021 : LAF | Panel discussion »
Aaron Snoswell · Jake Goldenfein · Finale Doshi-Velez · Evi Micha · Ivana Dusparic · Jonathan Stray -
2021 : LAF | The Role of Explanation in RL Legitimacy, Accountability, and Feedback »
Finale Doshi-Velez -
2021 : Invited talk #2: Finale Doshi-Velez »
Finale Doshi-Velez -
2021 Poster: Emergent Discrete Communication in Semantic Spaces »
Mycal Tucker · Huao Li · Siddharth Agrawal · Dana Hughes · Katia Sycara · Michael Lewis · Julie A Shah -
2021 Poster: Learning MDPs from Features: Predict-Then-Optimize for Sequential Decision Making by Reinforcement Learning »
Kai Wang · Sanket Shah · Haipeng Chen · Andrew Perrault · Finale Doshi-Velez · Milind Tambe -
2020 : Batch RL Models Built for Validation »
Finale Doshi-Velez -
2020 : Panel »
Emma Brunskill · Nan Jiang · Nando de Freitas · Finale Doshi-Velez · Sergey Levine · John Langford · Lihong Li · George Tucker · Rishabh Agarwal · Aviral Kumar -
2020 : Q & A and Panel Session with Tom Mitchell, Jenn Wortman Vaughan, Sanjoy Dasgupta, and Finale Doshi-Velez »
Tom Mitchell · Jennifer Wortman Vaughan · Sanjoy Dasgupta · Finale Doshi-Velez · Zachary Lipton -
2020 Workshop: I Can’t Believe It’s Not Better! Bridging the gap between theory and empiricism in probabilistic machine learning »
Jessica Forde · Francisco Ruiz · Melanie Fernandez Pradier · Aaron Schein · Finale Doshi-Velez · Isabel Valera · David Blei · Hanna Wallach -
2020 Poster: Incorporating Interpretable Output Constraints in Bayesian Neural Networks »
Wanqian Yang · Lars Lorch · Moritz Graule · Himabindu Lakkaraju · Finale Doshi-Velez -
2020 Spotlight: Incorporating Interpretable Output Constraints in Bayesian Neural Networks »
Wanqian Yang · Lars Lorch · Moritz Graule · Himabindu Lakkaraju · Finale Doshi-Velez -
2020 Poster: Model-based Reinforcement Learning for Semi-Markov Decision Processes with Neural ODEs »
Jianzhun Du · Joseph Futoma · Finale Doshi-Velez -
2020 : Discussion Panel: Hugo Larochelle, Finale Doshi-Velez, Devi Parikh, Marc Deisenroth, Julien Mairal, Katja Hofmann, Phillip Isola, and Michael Bowling »
Hugo Larochelle · Finale Doshi-Velez · Marc Deisenroth · Devi Parikh · Julien Mairal · Katja Hofmann · Phillip Isola · Michael Bowling -
2019 : Panel - The Role of Communication at Large: Aparna Lakshmiratan, Jason Yosinski, Been Kim, Surya Ganguli, Finale Doshi-Velez »
Aparna Lakshmiratan · Finale Doshi-Velez · Surya Ganguli · Zachary Lipton · Michela Paganini · Anima Anandkumar · Jason Yosinski -
2019 : Invited talk #5 »
Been Kim -
2019 : Invited talk #4 »
Finale Doshi-Velez -
2019 : Responsibilities »
Been Kim · Liz O'Sullivan · Friederike Schuur · Andrew Smart · Jacob Metcalf -
2019 : Finale Doshi-Velez: Combining Statistical methods with Human Input for Evaluation and Optimization in Batch Settings »
Finale Doshi-Velez -
2018 : Finale Doshi-Velez »
Finale Doshi-Velez -
2018 : Panel on research process »
Zachary Lipton · Charles Sutton · Finale Doshi-Velez · Hanna Wallach · Suchi Saria · Rich Caruana · Thomas Rainforth -
2018 : Finale Doshi-Velez »
Finale Doshi-Velez -
2018 Poster: Human-in-the-Loop Interpretability Prior »
Isaac Lage · Andrew Ross · Samuel J Gershman · Been Kim · Finale Doshi-Velez -
2018 Spotlight: Human-in-the-Loop Interpretability Prior »
Isaac Lage · Andrew Ross · Samuel J Gershman · Been Kim · Finale Doshi-Velez -
2018 Poster: Representation Balancing MDPs for Off-policy Policy Evaluation »
Yao Liu · Omer Gottesman · Aniruddh Raghu · Matthieu Komorowski · Aldo Faisal · Finale Doshi-Velez · Emma Brunskill -
2018 Poster: Bayesian Inference of Temporal Task Specifications from Demonstrations »
Ankit Shah · Pritish Kamath · Julie A Shah · Shen Li -
2017 : Panel Session »
Neil Lawrence · Finale Doshi-Velez · Zoubin Ghahramani · Yann LeCun · Max Welling · Yee Whye Teh · Ole Winther -
2017 : Finale Doshi-Velez »
Finale Doshi-Velez -
2017 : Invited Talk 1 »
Been Kim -
2017 : Automatic Model Selection in BNNs with Horseshoe Priors »
Finale Doshi-Velez -
2017 : Coffee break and Poster Session I »
Nishith Khandwala · Steve Gallant · Gregory Way · Aniruddh Raghu · Li Shen · Aydan Gasimova · Alican Bozkurt · William Boag · Daniel Lopez-Martinez · Ulrich Bodenhofer · Samaneh Nasiri GhoshehBolagh · Michelle Guo · Christoph Kurz · Kirubin Pillay · Kimis Perros · George H Chen · Alexandre Yahi · Madhumita Sushil · Sanjay Purushotham · Elena Tutubalina · Tejpal Virdi · Marc-Andre Schulz · Samuel Weisenthal · Bharat Srikishan · Petar Veličković · Kartik Ahuja · Andrew Miller · Erin Craig · Disi Ji · Filip Dabek · Chloé Pou-Prom · Hejia Zhang · Janani Kalyanam · Wei-Hung Weng · Harish Bhat · Hugh Chen · Simon Kohl · Mingwu Gao · Tingting Zhu · Ming-Zher Poh · Iñigo Urteaga · Antoine Honoré · Alessandro De Palma · Maruan Al-Shedivat · Pranav Rajpurkar · Matthew McDermott · Vincent Chen · Yanan Sui · Yun-Geun Lee · Li-Fang Cheng · Chen Fang · Sibt ul Hussain · Cesare Furlanello · Zeev Waks · Hiba Chougrad · Hedvig Kjellstrom · Finale Doshi-Velez · Wolfgang Fruehwirt · Yanqing Zhang · Lily Hu · Junfang Chen · Sunho Park · Gatis Mikelsons · Jumana Dakka · Stephanie Hyland · yann chevaleyre · Hyunwoo Lee · Xavier Giro-i-Nieto · David Kale · Michael Hughes · Gabriel Erion · Rishab Mehra · William Zame · Stojan Trajanovski · Prithwish Chakraborty · Kelly Peterson · Muktabh Mayank Srivastava · Amy Jin · Heliodoro Tejeda Lemus · Priyadip Ray · Tamas Madl · Joseph Futoma · Enhao Gong · Syed Rameel Ahmad · Eric Lei · Ferdinand Legros -
2017 : Contributed talk: Beyond Sparsity: Tree-based Regularization of Deep Models for Interpretability »
Mike Wu · Sonali Parbhoo · Finale Doshi-Velez -
2017 : Invited talk: The Role of Explanation in Holding AIs Accountable »
Finale Doshi-Velez -
2017 Poster: Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes »
Taylor Killian · Samuel Daulton · Finale Doshi-Velez · George Konidaris -
2017 Oral: Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes »
Taylor Killian · Samuel Daulton · Finale Doshi-Velez · George Konidaris -
2016 : BNNs for RL: A Success Story and Open Questions »
Finale Doshi-Velez -
2016 Workshop: The Future of Interactive Machine Learning »
Kory Mathewson · Kaushik Subramanian · Mark Ho · Robert Loftin · Joseph L Austerweil · Anna Harutyunyan · Doina Precup · Layla El Asri · Matthew Gombolay · Jerry Zhu · Sonia Chernova · Charles Isbell · Patrick M Pilarski · Weng-Keen Wong · Manuela Veloso · Julie A Shah · Matthew Taylor · Brenna Argall · Michael Littman -
2016 Workshop: Interpretable Machine Learning for Complex Systems »
Andrew Wilson · Been Kim · William Herlands -
2016 Oral: Examples are not enough, learn to criticize! Criticism for Interpretability »
Been Kim · Sanmi Koyejo · Rajiv Khanna -
2016 Poster: Examples are not enough, learn to criticize! Criticism for Interpretability »
Been Kim · Sanmi Koyejo · Rajiv Khanna -
2015 Workshop: Machine Learning From and For Adaptive User Technologies: From Active Learning & Experimentation to Optimization & Personalization »
Joseph Jay Williams · Yasin Abbasi Yadkori · Finale Doshi-Velez -
2015 : Data Driven Phenotyping for Diseases »
Finale Doshi-Velez -
2014 Poster: Fairness in Multi-Agent Sequential Decision-Making »
Chongjie Zhang · Julie A Shah -
2014 Poster: The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification »
Been Kim · Cynthia Rudin · Julie A Shah