Most recent progress in natural language understanding (NLU) has been driven, in part, by benchmarks such as GLUE, SuperGLUE, and SQuAD. Indeed, many NLU models have now matched or exceeded "human-level" performance on many tasks in these benchmarks. Most of these benchmarks, however, give models access to relatively large amounts of labeled training data, so the models receive far more data than humans require to achieve strong performance. This has motivated a line of work focused on improving the few-shot learning performance of NLU models. However, there is a lack of standardized evaluation benchmarks for few-shot NLU, resulting in different experimental settings across papers. To help accelerate this line of work, we introduce CLUES, a benchmark for evaluating the few-shot learning capabilities of NLU models. We demonstrate that while recent models reach human performance when they have access to large amounts of labeled data, there is a huge gap in performance in the few-shot setting for most tasks. We also demonstrate differences between alternative model families and adaptation techniques in the few-shot setting. Finally, we discuss several principles and choices in designing the experimental settings for evaluating true few-shot learning performance, and suggest a unified, standardized approach to few-shot learning evaluation. We aim to encourage research on NLU models that can generalize to new tasks with a small number of examples. Code and data for CLUES are available at https://github.com/microsoft/CLUES.
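For concreteness, below is a minimal sketch (not taken from the CLUES repository) of the kind of standardized few-shot evaluation protocol the abstract argues for: fix K labeled examples per class, repeat the experiment over several random splits, and report aggregate statistics rather than a single run. The helpers `train_model` and `score` are hypothetical placeholders for whichever model family, adaptation technique, and task metric are being compared.

```python
# Minimal sketch of a standardized few-shot evaluation loop (illustrative only,
# not the CLUES codebase): sample K labeled examples per class under several
# fixed seeds, adapt the model on each split, and report mean and spread of the
# scores so that one lucky (or unlucky) draw is not mistaken for model quality.
import random
import statistics
from typing import Callable, Dict, List, Sequence, Tuple

Example = Tuple[str, str]  # (input text, label)


def sample_few_shot_split(data: List[Example], k_per_class: int, seed: int) -> List[Example]:
    """Draw up to k labeled examples per class, with a fixed seed for reproducibility."""
    rng = random.Random(seed)
    by_label: Dict[str, List[Example]] = {}
    for ex in data:
        by_label.setdefault(ex[1], []).append(ex)
    split: List[Example] = []
    for examples in by_label.values():
        split.extend(rng.sample(examples, min(k_per_class, len(examples))))
    return split


def evaluate_few_shot(
    train_pool: List[Example],
    test_set: List[Example],
    train_model: Callable[[List[Example]], object],   # hypothetical: fine-tune or prompt a model
    score: Callable[[object, List[Example]], float],  # hypothetical: task metric on the test set
    k_per_class: int = 10,
    seeds: Sequence[int] = (13, 21, 42, 87, 100),
) -> Tuple[float, float]:
    """Run the same few-shot experiment over several random splits; return (mean, std)."""
    scores = [
        score(train_model(sample_few_shot_split(train_pool, k_per_class, seed)), test_set)
        for seed in seeds
    ]
    return statistics.mean(scores), statistics.stdev(scores)
```

Reporting the mean and standard deviation over multiple splits, rather than a single score, is one of the design choices the paper's unified evaluation protocol is meant to standardize.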
Author Information
Subhabrata Mukherjee (Microsoft Research)
Principal Researcher at Microsoft Research, leading a cross-org initiative on Efficient AI at Scale. Our focus is on efficient learning of massive neural networks for both model efficiency (e.g., neural architecture search, model compression, sparse and modular learning) and data efficiency (e.g., zero-shot and few-shot learning, semi-supervised learning). We develop state-of-the-art, computationally efficient models and techniques to enable AI practitioners, researchers, and engineers to use large-scale models in practice. Our technologies have been deployed in several enterprise scenarios, including Turing, Bing, and Microsoft 365. Honors: 2022 MIT Technology Review Innovators Under 35 semi-finalist (listed among 100 innovators under 35 worldwide) for work on Efficient AI.
Xiaodong Liu (Microsoft)
Guoqing Zheng (Carnegie Mellon University)
Saghar Hosseini (Microsoft Research)
Hao Cheng (Microsoft)
Ge Yang (Microsoft Research)
Christopher Meek (Microsoft Research)
Ahmed Awadallah (Microsoft Research)
I am passionate about using AI and machine learning to create intelligent user experiences that connect people to information. I lead a research and incubation team in Microsoft Research Technologies. Our work in the Language and Information Technologies team focuses on creating language understanding and user modeling technologies to enable intelligent experiences in multiple products. Our work has shipped in several products, such as Bing, Cortana, Office 365, and Dynamics 365. I have hands-on experience building and shipping state-of-the-art ML/AI algorithms, as well as building and managing world-class teams of scientists and engineers. My research interests are at the intersection of machine learning, language understanding, and information retrieval. A key part of my work involves using machine learning to model large-scale text and user behavior data, with applications to intelligent assistants, search, user modeling, quality evaluation, recommendation, and personalization. I received my Ph.D. from the Department of Computer Science and Engineering at the University of Michigan, Ann Arbor. I have invented, published, and patented new approaches in language understanding, information retrieval, and machine learning; I have published 60+ peer-reviewed papers in these areas and am an inventor on 20+ granted and pending patents.
Jianfeng Gao (Microsoft Research, Redmond, WA)
More from the Same Authors
- 2021 Spotlight: Focal Attention for Long-Range Interactions in Vision Transformers
  Jianwei Yang · Chunyuan Li · Pengchuan Zhang · Xiyang Dai · Bin Xiao · Lu Yuan · Jianfeng Gao
- 2021: Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models
  Boxin Wang · Chejian Xu · Shuohang Wang · Zhe Gan · Yu Cheng · Jianfeng Gao · Ahmed Awadallah · Bo Li
- 2022 Spotlight: Focal Modulation Networks
  Jianwei Yang · Chunyuan Li · Xiyang Dai · Jianfeng Gao
- 2022 Spotlight: ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models
  Chunyuan Li · Haotian Liu · Liunian Li · Pengchuan Zhang · Jyoti Aneja · Jianwei Yang · Ping Jin · Houdong Hu · Zicheng Liu · Yong Jae Lee · Jianfeng Gao
- 2022 Spotlight: Fault-Aware Neural Code Rankers
  Jeevana Priya Inala · Chenglong Wang · Mei Yang · Andres Codas · Mark Encarnación · Shuvendu Lahiri · Madanlal Musuvathi · Jianfeng Gao
- 2022 Poster: K-LITE: Learning Transferable Visual Models with External Knowledge
  Sheng Shen · Chunyuan Li · Xiaowei Hu · Yujia Xie · Jianwei Yang · Pengchuan Zhang · Zhe Gan · Lijuan Wang · Lu Yuan · Ce Liu · Kurt Keutzer · Trevor Darrell · Anna Rohrbach · Jianfeng Gao
- 2022 Poster: Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone
  Zi-Yi Dou · Aishwarya Kamath · Zhe Gan · Pengchuan Zhang · Jianfeng Wang · Linjie Li · Zicheng Liu · Ce Liu · Yann LeCun · Nanyun Peng · Jianfeng Gao · Lijuan Wang
- 2022 Poster: ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models
  Chunyuan Li · Haotian Liu · Liunian Li · Pengchuan Zhang · Jyoti Aneja · Jianwei Yang · Ping Jin · Houdong Hu · Zicheng Liu · Yong Jae Lee · Jianfeng Gao
- 2022 Poster: Few-shot Task-agnostic Neural Architecture Search for Distilling Large Language Models
  Dongkuan (DK) Xu · Subhabrata Mukherjee · Xiaodong Liu · Debadeepta Dey · Wenhui Wang · Xiang Zhang · Ahmed Awadallah · Jianfeng Gao
- 2022 Poster: Focal Modulation Networks
  Jianwei Yang · Chunyuan Li · Xiyang Dai · Jianfeng Gao
- 2022 Poster: Fault-Aware Neural Code Rankers
  Jeevana Priya Inala · Chenglong Wang · Mei Yang · Andres Codas · Mark Encarnación · Shuvendu Lahiri · Madanlal Musuvathi · Jianfeng Gao
- 2022 Poster: GLIPv2: Unifying Localization and Vision-Language Understanding
  Haotian Zhang · Pengchuan Zhang · Xiaowei Hu · Yen-Chun Chen · Liunian Li · Xiyang Dai · Lijuan Wang · Lu Yuan · Jenq-Neng Hwang · Jianfeng Gao
- 2021 Poster: Fairness via Representation Neutralization
  Mengnan Du · Subhabrata Mukherjee · Guanchu Wang · Ruixiang Tang · Ahmed Awadallah · Xia Hu
- 2021 Poster: Focal Attention for Long-Range Interactions in Vision Transformers
  Jianwei Yang · Chunyuan Li · Pengchuan Zhang · Xiyang Dai · Bin Xiao · Lu Yuan · Jianfeng Gao
- 2021 Poster: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer
  Ge Yang · Edward Hu · Igor Babuschkin · Szymon Sidor · Xiaodong Liu · David Farhi · Nick Ryder · Jakub Pachocki · Weizhu Chen · Jianfeng Gao
- 2021: WebQA Competition + Q&A
  Yingshan CHANG · Yonatan Bisk · Mridu Narang · Levi Melnick · Jianfeng Gao · Hisami Suzuki · Guihong Cao
- 2020 Workshop: Causal Discovery and Causality-Inspired Machine Learning
  Biwei Huang · Sara Magliacane · Kun Zhang · Danielle Belgrave · Elias Bareinboim · Daniel Malinsky · Thomas Richardson · Christopher Meek · Peter Spirtes · Bernhard Schölkopf
- 2020 Poster: Uncertainty-aware Self-training for Few-shot Text Classification
  Subhabrata Mukherjee · Ahmed Awadallah
- 2020 Spotlight: Uncertainty-aware Self-training for Few-shot Text Classification
  Subhabrata Mukherjee · Ahmed Awadallah
- 2019 Poster: Unified Language Model Pre-training for Natural Language Understanding and Generation
  Li Dong · Nan Yang · Wenhui Wang · Furu Wei · Xiaodong Liu · Yu Wang · Jianfeng Gao · Ming Zhou · Hsiao-Wuen Hon
- 2018 Poster: M-Walk: Learning to Walk over Graphs using Monte Carlo Tree Search
  Yelong Shen · Jianshu Chen · Po-Sen Huang · Yuqing Guo · Jianfeng Gao
- 2018 Poster: Generating Informative and Diverse Conversational Responses via Adversarial Information Maximization
  Yizhe Zhang · Michel Galley · Jianfeng Gao · Zhe Gan · Xiujun Li · Chris Brockett · Bill Dolan
- 2018 Poster: Navigating with Graph Representations for Fast and Scalable Decoding of Neural Language Models
  Minjia Zhang · Wenhan Wang · Xiaodong Liu · Jianfeng Gao · Yuxiong He
- 2017: Invited Talk: Microsoft (Asli and Jianfeng)
  Jianfeng Gao
- 2017 Poster: Mean Field Residual Networks: On the Edge of Chaos
  Ge Yang · Samuel Schoenholz
- 2015 Poster: End-to-end Learning of LDA by Mirror-Descent Back Propagation over a Deep Architecture
  Jianshu Chen · Ji He · Yelong Shen · Lin Xiao · Xiaodong He · Jianfeng Gao · Xinying Song · Li Deng
- 2014 Poster: Recursive Inversion Models for Permutations
  Christopher Meek · Marina Meila
- 2011 Poster: A Model for Temporal Dependencies in Event Streams
  Asela Gunawardana · Christopher Meek · Puyang Xu
- 2010 Spotlight: Exact inference and learning for cumulative distribution functions on loopy graphs
  Jim C Huang · Nebojsa Jojic · Christopher Meek
- 2010 Poster: Exact inference and learning for cumulative distribution functions on loopy graphs
  Jim C Huang · Nebojsa Jojic · Christopher Meek
- 2008 Poster: MAS: a multiplicative approximation scheme for probabilistic inference
  Ydo Wexler · Christopher Meek
- 2008 Oral: MAS: a multiplicative approximation scheme for probabilistic inference
  Ydo Wexler · Christopher Meek