Most recent progress in natural language understanding (NLU) has been driven, in part, by benchmarks such as GLUE, SuperGLUE, and SQuAD. In fact, many NLU models have now matched or exceeded "human-level" performance on many tasks in these benchmarks. Most of these benchmarks, however, give models access to relatively large amounts of labeled training data, so models are provided far more data than humans require to achieve strong performance. This has motivated a line of work focused on improving the few-shot learning performance of NLU models. However, there is no standardized evaluation benchmark for few-shot NLU, resulting in different experimental settings across papers. To help accelerate this line of work, we introduce CLUES, a benchmark for evaluating the few-shot learning capabilities of NLU models. We demonstrate that while recent models reach human performance when they have access to large amounts of labeled data, there is a huge gap in performance in the few-shot setting for most tasks. We also demonstrate differences between alternative model families and adaptation techniques in the few-shot setting. Finally, we discuss several principles and choices in designing the experimental settings for evaluating true few-shot learning performance, and suggest a unified, standardized approach to few-shot learning evaluation. We aim to encourage research on NLU models that can generalize to new tasks with a small number of examples. Code and data for CLUES are available at https://github.com/microsoft/CLUES.
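As a rough illustration of the kind of standardized protocol the abstract alludes to (this is not the CLUES specification, just a minimal sketch), few-shot evaluation typically fixes a small budget of labeled examples per task and reports aggregate performance over several random samples to account for the high variance of few-shot adaptation. The names `labeled_pool`, `test_set`, and `train_and_evaluate` below are hypothetical placeholders, not CLUES APIs.

```python
import random
from statistics import mean, stdev

def few_shot_evaluate(labeled_pool, test_set, train_and_evaluate,
                      num_shots=30, num_trials=5, seed=42):
    """Sketch of a standardized few-shot evaluation loop (hypothetical API).

    Each trial samples `num_shots` labeled examples, adapts the model via the
    user-supplied `train_and_evaluate` callable, and scores it on `test_set`.
    Reporting mean and standard deviation over trials reflects the variance
    typically observed when fine-tuning on tiny training sets.
    """
    rng = random.Random(seed)
    scores = []
    for _ in range(num_trials):
        # Each trial uses a different K-example training split.
        few_shot_train = rng.sample(labeled_pool, num_shots)
        scores.append(train_and_evaluate(few_shot_train, test_set))
    return {"mean": mean(scores),
            "std": stdev(scores) if len(scores) > 1 else 0.0}
```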
Author Information
Subhabrata Mukherjee (Microsoft Research)
I am a senior scientist at Microsoft Research (MSR) working at the intersection of natural language understanding, deep learning, and transfer learning. My current research is focused on making AI accessible to all, with two major themes: (1) scaling deep, large-scale natural language understanding models to scenarios with limited computational resources, leveraging techniques such as self-supervised, weakly supervised, and curriculum learning, data augmentation, and knowledge distillation; and (2) building trustworthy AI that mitigates misinformation and bias to provide fair and equitable information access to all. Prior to joining MSR, I led the information extraction efforts to build the Amazon Product Knowledge Graph, an authoritative knowledge graph for all products in the world. I graduated summa cum laude from the Max Planck Institute for Informatics, Germany, with a PhD in 2017, and was awarded the 2018 SIGKDD Doctoral Dissertation Runner-up Award for my thesis on credibility analysis and misinformation.
Xiaodong Liu (Microsoft)
Guoqing Zheng (Carnegie Mellon University)
Saghar Hosseini (Microsoft Research)
Hao Cheng (Microsoft)
Ge Yang (Microsoft Research)
Christopher Meek (Microsoft Research)
Ahmed Awadallah (Microsoft Research)
I am passionate about using AI and machine learning to create intelligent user experiences that connect people to information. I lead a research and incubation team in Microsoft Research Technologies. Our work on the Language and Information Technologies team focuses on creating language understanding and user modeling technologies that enable intelligent experiences, and it has shipped in several products such as Bing, Cortana, Office 365, and Dynamics 365. I have hands-on experience building and shipping state-of-the-art ML/AI algorithms, as well as building and managing world-class teams of scientists and engineers. My research interests are at the intersection of machine learning, language understanding, and information retrieval. A key part of my work involves using machine learning to model large-scale text and user behavior data, with applications to intelligent assistants, search, user modeling, quality evaluation, recommendation, and personalization. I received my Ph.D. from the Department of Computer Science and Engineering at the University of Michigan, Ann Arbor. I have invented, published, and patented new approaches in language understanding, information retrieval, and machine learning; I have published 60+ peer-reviewed papers in these areas and am an inventor on 20+ (granted and pending) patents.
Jianfeng Gao (Microsoft Research, Redmond, WA)
More from the Same Authors
- 2021 Spotlight: Focal Attention for Long-Range Interactions in Vision Transformers
  Jianwei Yang · Chunyuan Li · Pengchuan Zhang · Xiyang Dai · Bin Xiao · Lu Yuan · Jianfeng Gao
- 2021: Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models
  Boxin Wang · Chejian Xu · Shuohang Wang · Zhe Gan · Yu Cheng · Jianfeng Gao · Ahmed Awadallah · Bo Li
- 2021 Poster: Fairness via Representation Neutralization
  Mengnan Du · Subhabrata Mukherjee · Guanchu Wang · Ruixiang Tang · Ahmed Awadallah · Xia Hu
- 2021 Poster: Focal Attention for Long-Range Interactions in Vision Transformers
  Jianwei Yang · Chunyuan Li · Pengchuan Zhang · Xiyang Dai · Bin Xiao · Lu Yuan · Jianfeng Gao
- 2021 Poster: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer
  Ge Yang · Edward Hu · Igor Babuschkin · Szymon Sidor · Xiaodong Liu · David Farhi · Nick Ryder · Jakub Pachocki · Weizhu Chen · Jianfeng Gao
- 2021: WebQA Competition + Q&A
  Yingshan CHANG · Yonatan Bisk · Mridu Narang · Levi Melnick · Jianfeng Gao · Hisami Suzuki · Guihong Cao
- 2020 Workshop: Causal Discovery and Causality-Inspired Machine Learning
  Biwei Huang · Sara Magliacane · Kun Zhang · Danielle Belgrave · Elias Bareinboim · Daniel Malinsky · Thomas Richardson · Christopher Meek · Peter Spirtes · Bernhard Schölkopf
- 2020 Poster: Uncertainty-aware Self-training for Few-shot Text Classification
  Subhabrata Mukherjee · Ahmed Awadallah
- 2020 Spotlight: Uncertainty-aware Self-training for Few-shot Text Classification
  Subhabrata Mukherjee · Ahmed Awadallah
- 2019 Poster: Unified Language Model Pre-training for Natural Language Understanding and Generation
  Li Dong · Nan Yang · Wenhui Wang · Furu Wei · Xiaodong Liu · Yu Wang · Jianfeng Gao · Ming Zhou · Hsiao-Wuen Hon
- 2018 Poster: M-Walk: Learning to Walk over Graphs using Monte Carlo Tree Search
  Yelong Shen · Jianshu Chen · Po-Sen Huang · Yuqing Guo · Jianfeng Gao
- 2018 Poster: Generating Informative and Diverse Conversational Responses via Adversarial Information Maximization
  Yizhe Zhang · Michel Galley · Jianfeng Gao · Zhe Gan · Xiujun Li · Chris Brockett · Bill Dolan
- 2018 Poster: Navigating with Graph Representations for Fast and Scalable Decoding of Neural Language Models
  Minjia Zhang · Wenhan Wang · Xiaodong Liu · Jianfeng Gao · Yuxiong He
- 2017: Invited Talk: Microsoft (Asli and Jianfeng)
  Jianfeng Gao
- 2017 Poster: Mean Field Residual Networks: On the Edge of Chaos
  Ge Yang · Samuel Schoenholz
- 2015 Poster: End-to-end Learning of LDA by Mirror-Descent Back Propagation over a Deep Architecture
  Jianshu Chen · Ji He · Yelong Shen · Lin Xiao · Xiaodong He · Jianfeng Gao · Xinying Song · Li Deng
- 2014 Poster: Recursive Inversion Models for Permutations
  Christopher Meek · Marina Meila
- 2011 Poster: A Model for Temporal Dependencies in Event Streams
  Asela Gunawardana · Christopher Meek · Puyang Xu
- 2010 Spotlight: Exact inference and learning for cumulative distribution functions on loopy graphs
  Jim C Huang · Nebojsa Jojic · Christopher Meek
- 2010 Poster: Exact inference and learning for cumulative distribution functions on loopy graphs
  Jim C Huang · Nebojsa Jojic · Christopher Meek
- 2008 Poster: MAS: a multiplicative approximation scheme for probabilistic inference
  Ydo Wexler · Christopher Meek
- 2008 Oral: MAS: a multiplicative approximation scheme for probabilistic inference
  Ydo Wexler · Christopher Meek