Do we still need inductive biases after Transformer language models?
Siva Reddy
Fri Dec 02 02:05 PM -- 02:35 PM (PST)
In this talk, I will explore the role of inductive biases when fine-tuning large Transformer language models in three different scenarios: when the output space is structured, for example, semantic parsing from language to code; when performing multi-task learning where tasks may share some latent structure, e.g., different semantic tasks such as question answering and text entailment may share common reasoning skills; and when the input involves a higher-order (latent) structure such as negation. Inductive biases do not always help. Come with your wisest/wildest answers.
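As a concrete illustration of the first scenario, one common way to impose an inductive bias on a structured output space is to constrain decoding with a grammar so the model can only emit well-formed programs. The sketch below is hypothetical and not from the talk: the toy vocabulary, the GRAMMAR table, and the model_logits stand-in are all illustrative assumptions.

import math
import random

# Toy vocabulary for a SQL-like output space (illustrative only).
VOCAB = ["SELECT", "name", "FROM", "users", "<eos>"]

# Hypothetical next-token grammar: maps the last emitted token to the
# tokens with which a well-formed query may continue.
GRAMMAR = {
    None: {"SELECT"},
    "SELECT": {"name"},
    "name": {"FROM"},
    "FROM": {"users"},
    "users": {"<eos>"},
}

def model_logits(prefix):
    """Stand-in for a fine-tuned LM: arbitrary scores over the vocabulary."""
    rng = random.Random(len(prefix))  # deterministic for the sketch
    return [rng.uniform(-1.0, 1.0) for _ in VOCAB]

def constrained_greedy_decode(max_len=10):
    """Greedy decoding with grammar-based masking of invalid tokens."""
    prefix = []
    while len(prefix) < max_len:
        logits = model_logits(prefix)
        last = prefix[-1] if prefix else None
        allowed = GRAMMAR.get(last, {"<eos>"})
        # The inductive bias: tokens the grammar disallows get -inf score,
        # so the decoder can never leave the space of well-formed outputs.
        masked = [score if tok in allowed else -math.inf
                  for tok, score in zip(VOCAB, logits)]
        best = VOCAB[max(range(len(VOCAB)), key=masked.__getitem__)]
        if best == "<eos>":
            break
        prefix.append(best)
    return " ".join(prefix)

print(constrained_greedy_decode())  # -> "SELECT name FROM users"

Whether such hard constraints help after large-scale pretraining, or whether the model has already internalized the structure, is exactly the question the talk poses.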
Author Information
Siva Reddy (McGill University)
More from the Same Authors
- 2021: Visually Grounded Reasoning across Languages and Cultures
  Fangyu Liu · Emanuele Bugliarello · Edoardo Ponti · Siva Reddy · Desmond Elliott
- 2021 Poster: End-to-End Training of Multi-Document Reader and Retriever for Open-Domain Question Answering
  Devendra Singh · Siva Reddy · Will Hamilton · Chris Dyer · Dani Yogatama