Workshop: Advances in Programming Languages and Neurosymbolic Systems (AIPLANS)

AutoCoder: Leveraging Transformers for Automatic Code Synthesis

Mrinal Anand · Mayank Singh


Program synthesis from natural language descriptions is a challenging task. This paper explores two variants of transformer models for program synthesis and shows that they outperform existing state-of-the-art models. Towards the end, we also discuss the differences in the representations learned by these two variants, demonstrating that the vanilla transformer model has a higher capacity to memorize the training data than the other variant.
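One common way to quantify training-data memorization in a generative code model (a sketch of ours for illustration, not necessarily the metric used in the paper) is the fraction of generated programs that reproduce a training example verbatim:

```python
def memorization_rate(generated_programs, training_programs):
    """Fraction of generated programs that appear verbatim in the training data.

    A higher rate suggests the model is copying training examples rather
    than synthesizing novel programs.
    """
    train_set = set(training_programs)  # set for O(1) membership checks
    if not generated_programs:
        return 0.0
    hits = sum(1 for p in generated_programs if p in train_set)
    return hits / len(generated_programs)


# Toy example with hypothetical programs:
train = ["print('hello')", "x = 1 + 2", "for i in range(3): print(i)"]
outputs = ["print('hello')", "y = 5", "x = 1 + 2", "z = x * 2"]
print(memorization_rate(outputs, train))  # 2 of 4 outputs are verbatim copies -> 0.5
```

Exact string matching is the strictest notion of memorization; a fuzzier comparison (e.g. after normalizing whitespace or variable names) would catch near-copies as well.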