Poster

[Re] Reproduction and Extension of "Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation"

Erica Eaton · Pirouz Naghavi

Hall J (level 1) #1003

Keywords: [ ReScience - MLRC 2021 ] [ Journal Track ]


Abstract:

Scope of Reproducibility: The main claims we aim to reproduce are that bias controlled training, or the combination of counterfactual data augmentation, the positively biased data collected by Dinan et al. [5], and bias controlled training, yields generated dialogue for the LIGHT dataset in which the percentage of gendered words and the male bias closely match the ground truth.

Methodology: We fine-tuned a transformer model, pre-trained on Reddit data [1], using the ParlAI API [8] with counterfactual data augmentation, positively biased data collection, bias controlled training, and all three bias mitigation techniques combined, as described in the original paper [5]. We implemented counterfactual data augmentation and bias controlled training ourselves (illustrative sketches of these steps, and of the ParlAI fine-tuning call, appear after the abstract). All models were trained and evaluated on a single NVIDIA Tesla P100 PCIe GPU, which took approximately 1.3 to 4.6 GPU hours.

Results: Overall, our results support the main claims of the original paper [5]. Although the percentage of gendered words and the male bias in our results do not exactly match those in the original paper, the main trends are the same; the main difference is a lower male bias for the baseline model in our results. The trend similarities between our results and those obtained by Dinan et al. [5] demonstrate that bias controlled training, or combining all three bias mitigation techniques, can effectively control the amount of gender bias present in the model-generated responses, supporting Dinan et al.'s claims [5].

What was easy: Implementing counterfactual data augmentation and bias controlled training was easy, since these techniques are well described in the original paper [5]. Combining all three bias mitigation techniques was also simple, as we applied the same techniques used to implement each bias mitigation method individually.

What was difficult: The only difficulty we encountered, albeit minor, was learning how to use ParlAI, which was necessary to use the same model as in the original paper [5]. After reading the ParlAI documentation and experimenting with the ParlAI Google Colaboratory tutorial [10], we understood how to use ParlAI to fine-tune the model, pre-trained on Reddit conversations [1], on the datasets we created.

Communication with original authors: We communicated with Emily Dinan, an author of the original paper [5], who clarified which model was used in the original paper and provided us with the command to download the model as well as the hyperparameter settings used for fine-tuning.
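
The following is a minimal sketch of counterfactual data augmentation as described in the methodology: each gendered word in an utterance is swapped with its opposite-gender counterpart to produce an augmented copy of the dialogue. The word-pair list and function names here are our own illustrative choices, not the exact ones used by Dinan et al. [5] or in our implementation.

```python
# Minimal sketch of counterfactual data augmentation (illustrative only).
# The word-pair list is a small example; a real implementation uses a much
# larger list and must handle ambiguous pronoun pairs (e.g. his -> her/hers).

GENDERED_PAIRS = [
    ("he", "she"), ("him", "her"),
    ("man", "woman"), ("men", "women"),
    ("king", "queen"), ("kings", "queens"),
    ("father", "mother"), ("son", "daughter"),
]

# Build a bidirectional swap table.
SWAP = {}
for a, b in GENDERED_PAIRS:
    SWAP[a] = b
    SWAP[b] = a


def counterfactual(utterance: str) -> str:
    """Return a copy of the utterance with gendered words swapped."""
    swapped = []
    for token in utterance.split():
        core = token.strip(".,!?")  # match "king," as "king"
        if core.lower() in SWAP:
            replacement = SWAP[core.lower()]
            if core[0].isupper():  # preserve capitalization
                replacement = replacement.capitalize()
            swapped.append(token.replace(core, replacement))
        else:
            swapped.append(token)
    return " ".join(swapped)


print(counterfactual("The king greeted the queen."))
# -> "The queen greeted the king."
```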
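
Bias controlled training conditions the model on how gendered the target response is: each training example's label is bucketed by whether it contains female and/or male gendered words, and a corresponding control token is appended to the dialogue context; at inference, the desired bucket (e.g. no gendered words) is supplied to steer generation. The sketch below uses the four buckets described by Dinan et al. [5]; the helper names, word lists, and exact token format are illustrative assumptions.

```python
# Minimal sketch of data preparation for bias controlled training (illustrative).
# Each label is bucketed by the presence of female/male gendered words, and the
# matching control token is appended to the dialogue context.

FEMALE_WORDS = {"she", "her", "hers", "woman", "women", "queen"}
MALE_WORDS = {"he", "him", "his", "man", "men", "king"}


def bias_bucket(label: str) -> str:
    """Return one of four control tokens: f0/f1 crossed with m0/m1."""
    tokens = {t.strip(".,!?").lower() for t in label.split()}
    f = "f1" if tokens & FEMALE_WORDS else "f0"
    m = "m1" if tokens & MALE_WORDS else "m0"
    return f"{f} {m}"


def add_control_token(context: str, label: str) -> str:
    """Append the control token for this example's label to its context."""
    return f"{context} {bias_bucket(label)}"


# Training time: tag the context with the label's bucket.
print(add_control_token("Who rules this castle?", "The queen rules it."))
# -> "Who rules this castle? f1 m0"

# Inference time: append the desired bucket (e.g. "f0 m0" for no gendered
# words) to the context to control how gendered the generated reply is.
```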
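
For fine-tuning the Reddit-pretrained transformer with ParlAI, the interface demonstrated in the ParlAI Colab tutorial [10] is TrainModel.main from parlai.scripts.train_model. The snippet below is a sketch of such a call, assuming the tutorial's Reddit-pretrained generator from the ParlAI model zoo and the built-in light_dialog task; the model-file path and optimization settings are placeholders, not the hyperparameters provided by the original authors.

```python
# Sketch of fine-tuning a Reddit-pretrained transformer generator with ParlAI,
# following the pattern of the ParlAI Colab tutorial [10]. The model-file path
# and optimization settings below are illustrative placeholders.
from parlai.scripts.train_model import TrainModel

TrainModel.main(
    task='light_dialog',  # or a custom task built from the augmented data
    model='transformer/generator',
    # Reddit-pretrained generator from the ParlAI model zoo
    init_model='zoo:tutorial_transformer_generator/model',
    dict_file='zoo:tutorial_transformer_generator/model.dict',
    model_file='/tmp/bias_mitigation/model',  # where the fine-tuned model is saved
    # architecture settings matching the pretrained checkpoint (per the tutorial [10])
    n_layers=8, n_heads=16, embedding_size=512, ffn_size=2048,
    n_positions=512, variant='xlm', activation='gelu',
    dict_tokenizer='bpe', dict_lower=True,
    learn_positional_embeddings=True,
    # illustrative optimization settings
    optimizer='adam', lr=1e-5, batchsize=16,
    validation_metric='ppl', validation_every_n_epochs=0.25,
    skip_generation=True,  # speeds up validation
)
```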
