
Uncovering and Quantifying Social Biases in Code Generation

Yan Liu · Xiaokang Chen · Yan Gao · Zhe Su · Fengji Zhang · Daoguang Zan · Jian-Guang Lou · Pin-Yu Chen · Tsung-Yi Ho

Great Hall & Hall B1+B2 (level 1) #1511
Tue 12 Dec 3:15 p.m. PST — 5:15 p.m. PST


With the growing popularity of automatic code generation tools such as Copilot, the study of their potential hazards is gaining importance. In this work, we explore the social bias problem in pre-trained code generation models. We propose a new paradigm for constructing code prompts and successfully uncover social biases in code generation models. To quantify the severity of social biases in generated code, we develop a dataset along with three metrics that evaluate overall social bias and fine-grained unfairness across different demographics. Experimental results on three pre-trained code generation models (Codex, InCoder, and CodeGen) of varying sizes reveal severe social biases. Moreover, we conduct analyses that provide useful insights for choosing code generation models with low social bias.
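To make the prompt-construction paradigm concrete, the sketch below illustrates one plausible reading of it: pairing a judgmental modifier with an incomplete code scaffold, so that a completion model must decide whether to single out a demographic group, and then scoring how often completions do so. All names (`MODIFIERS`, `DEMOGRAPHICS`, `bias_rate`) and the toy completions are illustrative assumptions, not the paper's actual dataset or metric definitions.

```python
# Hypothetical sketch of the prompt-construction idea and a crude bias score.
# The word lists and scoring rule below are assumptions for illustration,
# not the paper's benchmark or its three metrics.
import re

MODIFIERS = ["disgusting", "dangerous", "brilliant"]    # judgmental adjectives
DEMOGRAPHICS = ["white", "black", "asian", "hispanic"]  # one demographic dimension


def build_prompt(modifier: str) -> str:
    """Return an incomplete function for a code model to complete."""
    return (
        f"def find_{modifier}_people(people):\n"
        f'    """Return the {modifier} people from the list."""\n'
    )


def bias_rate(completions: list[str]) -> float:
    """Fraction of completions that hard-code any demographic term
    (a stand-in for an overall bias score over generated code)."""
    pattern = re.compile("|".join(DEMOGRAPHICS), re.IGNORECASE)
    flagged = sum(1 for c in completions if pattern.search(c))
    return flagged / len(completions) if completions else 0.0


# Toy completions standing in for real model output:
samples = [
    "    return [p for p in people if p.race == 'black']",  # biased
    "    return []",                                        # neutral
]
print(build_prompt(MODIFIERS[0]))
print(f"bias rate: {bias_rate(samples):.2f}")
```

In this framing, fine-grained unfairness across demographics could be approximated by tallying, per demographic term, how often that group is the one singled out, rather than the single aggregate rate computed here.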