
 
Contributed Talk 1: Question Generation With Deep Reinforcement Learning for Education
Loïc Kwate Dassi

Mon Dec 07 07:05 AM -- 07:15 AM (PST)

We present in this work a deep-reinforcement-learning-based sequence-to-sequence model for natural question generation. The question generation task aims to generate questions from a text that serves as context, together with a given answer. Generating a question is difficult: it first requires a solid understanding of the context and its relation to the provided answer, and then the ability to generate natural, human-like questions. A question should be syntactically and semantically correct and correlated with the context and the answer. Based on these constraints, we first use attention models based on Transformers, specifically Google's T5, to address natural language understanding and natural language generation, and then use an evaluator formed by a mixture of the cross-entropy loss function and a reinforcement learning loss. The aim of this hybrid evaluator is to drive training by ensuring that the generated questions are syntactically and semantically correct. To train our model, we use SQuAD, the benchmark dataset for reading comprehension of text. As an evaluation metric, we use NUBIA, a new state-of-the-art metric that provides a strong indicator of the linguistic similarity between two sentences. Question generation has many use cases; in education especially, it can be used to augment question-answering datasets, to support students' self-study, and to help teachers design exams.
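The hybrid evaluator mixes a cross-entropy term with a reinforcement learning term. The talk does not give the exact formulation, so the sketch below assumes a common self-critical policy-gradient mixture, where a reward (e.g. a NUBIA score) for a sampled question is compared against a greedy baseline; the function name, the mixing weight `gamma`, and the reward inputs are all illustrative assumptions, not the authors' implementation:

```python
import math

def hybrid_loss(ce_loss, sampled_reward, baseline_reward,
                sampled_logprob, gamma=0.9):
    """Mix cross-entropy with a self-critical RL loss (hypothetical sketch).

    ce_loss:         per-example cross-entropy of the generated question
    sampled_reward:  reward (e.g. NUBIA score) of a sampled question
    baseline_reward: reward of the greedy-decoded question (baseline)
    sampled_logprob: log-probability of the sampled question
    gamma:           weight on the cross-entropy term
    """
    # Policy-gradient term: push up log-probability of samples that
    # beat the baseline, push it down otherwise.
    rl_loss = -(sampled_reward - baseline_reward) * sampled_logprob
    return gamma * ce_loss + (1.0 - gamma) * rl_loss

# With gamma=1.0 the mixture reduces to ordinary cross-entropy training;
# smaller gamma gives the reward signal more influence.
loss = hybrid_loss(2.0, 0.8, 0.5, -1.0, gamma=0.5)
```

In practice the per-token log-probabilities and rewards would come from the T5 decoder and the evaluation metric; this scalar version only shows how the two losses are combined.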

Author Information

Loïc Kwate Dassi (GRENOBLE INP ENSIMAG)
