

Poster in Affinity Workshop: Black in AI

Detecting gender bias in pre-trained language models for zero-shot text classification

Nile Dixon

Keywords: [ Natural Language Processing ] [ ethics ]


Abstract:

Due to the limited availability of labeled data, many researchers are interested in categorizing text data using unseen labels. This became more feasible with the advent of Transformer models. Many pre-trained models have been fine-tuned on entailment sentence pairs to perform dataless text classification with much success. However, other researchers have discovered that these large language models contain gender and racial biases that can perpetuate negative stereotypes. While many researchers have explored the prevalence of gender bias in pre-trained word and sentence embeddings, there hasn’t been much research on measuring and mitigating gender bias in zero-shot text classification. In this research, I propose a method for evaluating gender bias in zero-shot text classification models and apply this technique to BART.
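The entailment-based setup the abstract describes can be reproduced with the Hugging Face transformers zero-shot classification pipeline. The sketch below uses facebook/bart-large-mnli (a BART checkpoint fine-tuned on entailment sentence pairs) and probes it with a minimal pair of sentences that differ only in a gendered pronoun. The labels, sentences, and score comparison are illustrative assumptions, not the poster's exact evaluation protocol.

```python
# Minimal sketch: entailment-based zero-shot classification with BART,
# plus one hedged way to probe for gender bias by swapping gendered terms.
# The bias-probing setup here is an illustrative assumption, not the
# specific method proposed in the poster.
from transformers import pipeline

# facebook/bart-large-mnli is BART fine-tuned on entailment pairs (MNLI),
# a common backbone for zero-shot ("dataless") text classification.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

labels = ["engineer", "nurse", "teacher"]  # hypothetical occupation labels

# A minimal pair: identical sentences except for the gendered pronoun.
sentences = {
    "male": "He spent the weekend fixing the car engine.",
    "female": "She spent the weekend fixing the car engine.",
}

for gender, text in sentences.items():
    result = classifier(text, candidate_labels=labels)
    scores = {lab: round(s, 3) for lab, s in zip(result["labels"], result["scores"])}
    print(gender, scores)
# A systematic gap in label scores between the two variants suggests the
# model's zero-shot predictions are sensitive to gender cues alone.
```

Aggregating such score gaps over many templates and label sets would yield a quantitative bias measure, which is one plausible reading of the evaluation method proposed here.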
