Generation Probabilities are Not Enough: Improving Error Highlighting for AI Code Suggestions
Helena Vasconcelos · Gagan Bansal · Adam Fourney · Q. Vera Liao · Jennifer Wortman Vaughan
Event URL: https://openreview.net/forum?id=wngXcmrjVyD

Large-scale generative models are increasingly being used in tooling applications. As one prominent example, code generation models recommend code completions within an IDE to help programmers author software. However, since these models are imperfect, their erroneous recommendations can introduce bugs or even security vulnerabilities into a code base if not overridden by a human user. To override such errors, users must first detect them. One existing method of assisting this detection is to highlight tokens with low generation probabilities. We propose an alternative: highlighting the tokens that people are most likely to edit, as predicted by an edit model. Through a mixed-methods, pre-registered study with N = 30 participants, we find that the edit-model highlighting strategy resulted in significantly faster task completion times, significantly more localized edits, and was strongly preferred by participants.
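The low-probability baseline described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes per-token log-probabilities are available from the generation model (many completion APIs expose these), and the threshold value and helper name are hypothetical choices made for the example.

import math

def highlight_low_probability_tokens(tokens, logprobs, tau=0.5):
    """Flag tokens whose generation probability falls below a threshold.

    tokens   -- generated token strings
    logprobs -- per-token log-probabilities from the generation model
    tau      -- hypothetical probability cutoff (a tuning choice for this
                example, not a value taken from the paper)
    Returns (token, probability, highlighted?) triples.
    """
    out = []
    for tok, lp in zip(tokens, logprobs):
        p = math.exp(lp)  # convert log-probability back to a probability
        out.append((tok, p, p < tau))
    return out

# Example: a buggy completion ("-" instead of "+") whose erroneous token
# happens to have a low generation probability.
tokens = ["def", " add", "(", "a", ",", " b", ")", ":", " return", " a", " -", " b"]
logprobs = [-0.01, -0.2, -0.05, -0.3, -0.02, -0.25, -0.01, -0.01, -0.1, -0.2, -2.3, -0.4]

for tok, p, flagged in highlight_low_probability_tokens(tokens, logprobs):
    marker = " <-- highlight" if flagged else ""
    print(f"{tok!r}: p={p:.2f}{marker}")

As the paper's results suggest, this heuristic is imperfect: low probability signals that a token was unlikely under the model, not that a human would want to change it, which is the gap the edit-model strategy targets.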

Author Information

Helena Vasconcelos (Stanford University)
Gagan Bansal
Adam Fourney (Microsoft Research)

I am a computer scientist in the Human-AI eXperiences (HAX) research group at Microsoft Research in Redmond. Before joining Microsoft, I studied at the University of Waterloo, where I earned my Master's and Doctorate degrees.

Q.Vera Liao (Microsoft)
Jennifer Wortman Vaughan (Microsoft Research)

Jenn Wortman Vaughan is a Senior Principal Researcher at Microsoft Research, New York City. Her research background is in machine learning and algorithmic economics. She is especially interested in the interaction between people and AI, and has often studied this interaction in the context of prediction markets and other crowdsourcing systems. In recent years, she has turned her attention to human-centered approaches to transparency, interpretability, and fairness in machine learning as part of MSR's FATE group and as co-chair of Microsoft’s Aether Working Group on Transparency. Jenn came to MSR in 2012 from UCLA, where she was an assistant professor in the computer science department. She completed her Ph.D. at the University of Pennsylvania in 2009, and subsequently spent a year as a Computing Innovation Fellow at Harvard. She is the recipient of Penn's 2009 Rubinoff dissertation award for innovative applications of computer technology, a National Science Foundation CAREER award, a Presidential Early Career Award for Scientists and Engineers (PECASE), and a handful of best paper awards. In her "spare" time, Jenn is involved in a variety of efforts to provide support for women in computer science; most notably, she co-founded the Annual Workshop for Women in Machine Learning, which has been held each year since 2006.
