
Keynote Talk
Workshop: Second Workshop on Efficient Natural Language and Speech Processing (ENLSP-II)

Do we still need inductive biases after Transformer language models?

Siva Reddy


In this talk, I will explore the role of inductive biases when fine-tuning large Transformer language models in three different scenarios: when the output space is structured, for example, semantic parsing from language to code; when performing multi-task learning where tasks may share some latent structure, e.g., different semantic tasks like question answering and text entailment may share common reasoning skills; and when the input involves a higher-order (latent) structure such as negation. It is not always the case that inductive biases help. Come with your wisest/wildest answers.
