Investigating Gender Bias in Language Models Using Causal Mediation Analysis
Jesse Vig · Sebastian Gehrmann · Yonatan Belinkov · Sharon Qian · Daniel Nevo · Yaron Singer · Stuart Shieber

Mon Dec 07 07:10 PM -- 07:20 PM (PST) @ Orals & Spotlights: Language/Audio Applications

Many interpretation methods for neural models in natural language processing investigate how information is encoded inside hidden representations. However, these methods can only measure whether the information exists, not whether it is actually used by the model. We propose a methodology grounded in the theory of causal mediation analysis for interpreting which parts of a model are causally implicated in its behavior. The approach enables us to analyze the mechanisms that facilitate the flow of information from input to output through various model components, known as mediators. As a case study, we apply this methodology to analyzing gender bias in pre-trained Transformer language models. We study the role of individual neurons and attention heads in mediating gender bias across three datasets designed to gauge a model's sensitivity to gender bias. Our mediation analysis reveals that gender bias effects are concentrated in specific components of the model that may exhibit highly specialized behavior.
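To make the mediation idea concrete, the following is a minimal toy sketch (not the authors' code) of measuring a neuron's indirect effect: run the model on a base input, but set one hidden neuron to the value it takes under a counterfactual (e.g. gender-swapped) input, and compare outputs. All names (`W1`, `w2`, `hidden`, `output`) and the tiny two-layer model are illustrative assumptions; the paper applies this logic to neurons and attention heads of pre-trained Transformers.

```python
import numpy as np

# Toy stand-in for a network: one hidden layer plus a linear readout.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # hypothetical first-layer weights
w2 = rng.normal(size=4)        # hypothetical readout weights

def hidden(x):
    """Hidden activations (the candidate mediators)."""
    return np.tanh(W1 @ x)

def output(x, override=None):
    """Model output, optionally intervening on one hidden neuron.

    override = (k, v) clamps neuron k to value v, mimicking the
    mediation-analysis intervention on a single model component.
    """
    h = hidden(x)
    if override is not None:
        k, v = override
        h = h.copy()
        h[k] = v
    return w2 @ h

# Base input and its counterfactual (e.g. a gender-swapped prompt).
x_base = np.array([1.0, 0.0, 0.0])
x_alt = np.array([0.0, 1.0, 0.0])

# Total effect: change in output from swapping the input outright.
total_effect = output(x_alt) - output(x_base)

# Indirect effect of neuron k: keep the base input, but set neuron k
# to the activation it would have had under the counterfactual input.
def indirect_effect(k):
    return output(x_base, override=(k, hidden(x_alt)[k])) - output(x_base)

effects = [indirect_effect(k) for k in range(4)]
```

Because the toy readout is linear in the hidden layer, the per-neuron indirect effects sum exactly to the total effect, which makes the decomposition easy to sanity-check; in a real Transformer the effects interact nonlinearly, and the paper measures them per component.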

Author Information

Jesse Vig (Salesforce Research)
Sebastian Gehrmann (Harvard University)
Yonatan Belinkov (Technion)
Sharon Qian (Harvard University)
Daniel Nevo (Tel Aviv University)
Yaron Singer (Harvard University)
Stuart Shieber (Harvard University)
