

Poster in Workshop: Synthetic Data Generation with Generative AI

GeMQuAD: Generating Multilingual Question Answering Datasets from Large Language Models using Few Shot Learning

Amani Namboori · Shivam Mangale · Andy Rosenbaum · Saleh Soltan

Keywords: [ low resource ] [ In context learning ] [ Multilingual ] [ FSL ] [ Low cost ] [ ICL ] [ GeMQuAD ] [ Large language models ] [ few shot learning ] [ question answering ] [ LLM ] [ Generative AI ] [ Extractive QA ] [ QA ]


Abstract:

The emergence of Large Language Models (LLMs) with capabilities like In-Context Learning (ICL) has ushered in new possibilities for data generation across various domains while minimizing the need for extensive data collection and modeling techniques. Researchers have explored ways to use this generated synthetic data to optimize smaller student models for reduced deployment costs and lower latency in downstream tasks. However, ICL-generated data often suffers from low quality, as task specificity is limited by the few examples used in ICL. In this paper, we propose GeMQuAD, a semi-supervised learning approach extending the WeakDAP framework, applied to a dataset generated through ICL with just one example in the target language using the AlexaTM 20B Seq2Seq LLM. Through our approach, we iteratively identify high-quality data to enhance model performance, especially in low-resource multilingual settings for the Extractive Question Answering task. Our framework surpasses the performance of a baseline model trained on an English-only dataset by 5.05/6.50 points in F1/Exact Match (EM) for Hindi and by 3.81/3.69 points in F1/EM for Spanish on the MLQA dataset. Notably, our approach uses a pre-trained LLM with no additional fine-tuning, generating data from only one annotated example in ICL, which keeps the development process cost-effective.
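Since the full paper is not reproduced on this page, the iterative selection procedure the abstract describes can be summarized in a short Python sketch. Everything below is an illustrative assumption rather than the authors' released code: the QARecord type, the callables for student prediction and retraining, and the F1 threshold and round budget are placeholders for whatever GeMQuAD actually uses.

    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    @dataclass
    class QARecord:
        # One synthetic example produced by 1-shot ICL with the LLM.
        context: str
        question: str
        answer: str

    # (context, question) -> predicted answer span
    PredictFn = Callable[[str, str], str]

    def weakdap_style_rounds(
        synthetic: List[QARecord],
        predict: PredictFn,
        retrain: Callable[[List[QARecord]], PredictFn],   # hypothetical trainer
        f1: Callable[[str, str], float],                  # (prediction, reference) -> F1
        threshold: float = 0.8,                           # assumed cutoff, not from the paper
        max_rounds: int = 3,                              # assumed round budget
    ) -> Tuple[PredictFn, List[QARecord]]:
        # Keep only the synthetic records the current student already answers
        # with high F1, retrain on the accumulated set, and repeat until no
        # new records pass the filter or the round budget is exhausted.
        selected: List[QARecord] = []
        for _ in range(max_rounds):
            kept = [r for r in synthetic
                    if f1(predict(r.context, r.question), r.answer) >= threshold]
            if not kept:
                break
            selected.extend(kept)
            synthetic = [r for r in synthetic if r not in kept]
            predict = retrain(selected)
        return predict, selected

In this reading, the student model itself acts as the quality filter in the spirit of WeakDAP: records it already answers well are treated as reliable and folded into the next round of training, so no fine-tuning of the generating LLM is ever needed.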
