Parsing sentences into syntax trees can benefit downstream applications in NLP. Transition-based parsers build trees by executing actions in a state transition system. They are computationally efficient, and can leverage machine learning to predict actions based on partial trees. However, existing transition-based parsers are predominantly based on the shift-reduce transition system, which does not align with how humans are known to parse sentences. Psycholinguistic research suggests that human parsing is strongly incremental—humans grow a single parse tree by adding exactly one token at each step. In this paper, we propose a novel transition system called attach-juxtapose. It is strongly incremental; it represents a partial sentence using a single tree; each action adds exactly one token into the partial tree. Based on our transition system, we develop a strongly incremental parser. At each step, it encodes the partial tree using a graph neural network and predicts an action. We evaluate our parser on Penn Treebank (PTB) and Chinese Treebank (CTB). On PTB, it outperforms existing parsers trained with only constituency trees; and it performs on par with state-of-the-art parsers that use dependency trees as additional training data. On CTB, our parser establishes a new state of the art. Code is available at https://github.com/princeton-vl/attach-juxtapose-parser.
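To make the transition system concrete, here is a minimal illustrative sketch in Python of the two kinds of actions the abstract describes, each adding exactly one token to a single partial tree. This is not the authors' implementation (see the linked repository for that); the `Node`, `attach`, and `juxtapose` names and the in-place tree surgery are simplifying assumptions for illustration only.

```python
class Node:
    """A constituency-tree node; leaves carry tokens as labels."""
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

def rightmost_chain(root):
    """Internal nodes from the root down along rightmost children —
    the only places where a strongly incremental parser may act."""
    chain, node = [], root
    while node.children:
        chain.append(node)
        node = node.children[-1]
    return chain

def attach(root, target_index, token):
    """attach: add the new token as the rightmost child of a chain node."""
    target = rightmost_chain(root)[target_index]
    target.children.append(Node(token))
    return root

def juxtapose(root, target_index, token, new_label):
    """juxtapose: splice in a new parent that holds the chain node's old
    subtree on the left and the new token on the right."""
    target = rightmost_chain(root)[target_index]
    new_node = Node(new_label, [Node(target.label, target.children), Node(token)])
    target.label, target.children = new_node.label, new_node.children
    return root
```

Both actions consume one token and keep the state a single tree, which is what distinguishes this style of system from shift-reduce, where the state is a stack of disconnected fragments.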
Author Information
Kaiyu Yang (Princeton University)
I am a Ph.D. candidate in the Department of Computer Science at Princeton University, where I work with Prof. Jia Deng in the Princeton Vision & Learning Lab. I also collaborate closely with Prof. Olga Russakovsky. My research focuses on bridging deep learning and symbolic reasoning, with applications in automated theorem proving and mathematical reasoning in natural language. Before that, I worked in computer vision, on topics including human poses, visual relationships, and fairness. I received my master's degree from the University of Michigan and my bachelor's degree from Tsinghua University.
Jia Deng (Princeton University)
More from the Same Authors
- 2021: Fairness and privacy aspects of ImageNet
  Olga Russakovsky · Kaiyu Yang
- 2021 Oral: DROID-SLAM: Deep Visual SLAM for Monocular, Stereo, and RGB-D Cameras
  Zachary Teed · Jia Deng
- 2021 Poster: DROID-SLAM: Deep Visual SLAM for Monocular, Stereo, and RGB-D Cameras
  Zachary Teed · Jia Deng
- 2020 Poster: Learning to Prove Theorems by Learning to Generate Theorems
  Mingzhe Wang · Jia Deng
- 2020 Poster: Rel3D: A Minimally Contrastive Benchmark for Grounding Spatial Relations in 3D
  Ankit Goyal · Kaiyu Yang · Dawei Yang · Jia Deng
- 2020 Spotlight: Rel3D: A Minimally Contrastive Benchmark for Grounding Spatial Relations in 3D
  Ankit Goyal · Kaiyu Yang · Dawei Yang · Jia Deng
- 2011 Poster: Fast and Balanced: Efficient Label Tree Learning for Large Scale Object Recognition
  Jia Deng · Sanjeev Satheesh · Alexander C Berg · Li Fei-Fei