Neural networks have had a profound effect on how researchers approach many problems, but they do so through complex, nonlinear mathematical structures that can be difficult to interpret or understand. This is especially true for recurrent models, whose dynamic structure is hard to measure and analyze. Interpretability, however, is a key requirement in domains such as text and language analysis. In this paper, we present a novel introspection method for LSTMs trained to solve complex language problems, such as sentiment analysis. Inspired by Information Bottleneck theory, our method uses a state-of-the-art information-theoretic framework to visualize the information shared among labels, features, and layers. We apply our approach to simulated data and to real sentiment analysis datasets, providing novel information-theoretic insights into internal model dynamics.
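The abstract does not spell out implementation details, so the following is only a minimal sketch of the general idea it describes: quantifying how much information a hidden unit's activations share with the class labels via mutual information. It uses a generic histogram estimator, not the authors' actual framework; the function name, binning scheme, and toy data are illustrative assumptions.

```python
# Sketch: estimate the mutual information I(h; y) between one hidden
# unit's activations h and integer labels y by discretizing h.
# This is a plain histogram estimator, NOT the paper's exact method.
import numpy as np

def mutual_information(h, y, n_bins=30):
    """Estimate I(h; y) in nats from a 1-D activation vector h and
    nonnegative integer labels y, via a binned joint histogram."""
    edges = np.histogram_bin_edges(h, bins=n_bins)
    h_binned = np.digitize(h, edges)          # bin index per sample
    joint = np.zeros((n_bins + 2, y.max() + 1))
    for hb, yb in zip(h_binned, y):
        joint[hb, yb] += 1
    joint /= joint.sum()                      # joint distribution p(h, y)
    p_h = joint.sum(axis=1, keepdims=True)    # marginal p(h)
    p_y = joint.sum(axis=0, keepdims=True)    # marginal p(y)
    nz = joint > 0                            # avoid log(0)
    return float((joint[nz] * np.log(joint[nz] / (p_h @ p_y)[nz])).sum())

# Toy usage: one "informative" unit whose activation tracks a binary
# sentiment label should yield an MI estimate well above zero.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)
h = y + 0.5 * rng.standard_normal(1000)
print(mutual_information(h, y))
```

Scanning such estimates across units, layers, and training time is the kind of view the paper's visualizations are built on; the actual estimator a study like this uses would typically be more sophisticated than simple binning.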
Author Information
Bradley Baker (Georgia Institute of Technology)
Noah Lewis
Debbrata Kumar Saha (Georgia Institute of Technology)
Md Abdur Rahaman (Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia Institute of Technology, Georgia State University, Emory University)
Sergey Plis (TReNDS Center, Georgia State University)
Vince Calhoun (Georgia State University)
More from the Same Authors
- 2021: Single-Shot Pruning for Offline Reinforcement Learning
  Samin Yeasar Arnob · Riyasat Ohib · Sergey Plis · Doina Precup
- 2022: Reducing Causal Illusions through Deliberate Undersampling
  Kseniya Solovyeva · David Danks · Mohammadsajad Abavisani · Sergey Plis
- 2022: GRACE-C: Generalized Rate Agnostic Causal Estimation via Constraints
  Mohammadsajad Abavisani · David Danks · Vince Calhoun · Sergey Plis
- 2022: CommsVAE: Learning the brain's macroscale communication dynamics using coupled sequential VAEs
  Eloy Geenjaar · Noah Lewis · Amrit Kashyap · Robyn Miller · Vince Calhoun
- 2023: DynaLay: An Introspective Approach to Dynamic Layer Selection for Deep Networks
  Mrinal Mathur · Sergey Plis
- 2023: Uncovering the latent dynamics of whole-brain fMRI tasks with a sequential variational autoencoder
  Eloy Geenjaar · Donghyun Kim · Riyasat Ohib · Marlena Duda · Amrit Kashyap · Sergey Plis · Vince Calhoun
- 2023: Decentralized Sparse Federated Learning for Efficient Training on Distributed NeuroImaging Data
  Bishal Thapaliya · Riyasat Ohib · Eloy Geenjaar · Jingyu Liu · Vince Calhoun · Sergey Plis
- 2023: Aberrant High-Order Dependencies in Schizophrenia Resting-State Functional MRI Networks
  Qiang Li · Vince Calhoun · Adithya Ram Ballem · Shujian Yu · Jesús Malo · Armin Iraji
- 2017: Competition II: Learning to Run
  Łukasz Kidziński · Carmichael Ong · Sharada Mohanty · Jason Fries · Jennifer Hicks · Zhuobin Zheng · Chun Yuan · Sergey Plis
- 2015 Poster: Rate-Agnostic (Causal) Structure Learning
  Sergey Plis · David Danks · Cynthia Freeman · Vince Calhoun