

Poster in Workshop: Second Workshop on Efficient Natural Language and Speech Processing (ENLSP-II)

An Exploration of Methods for Zero-shot Transfer in Small Language Models

Alon Albalak · Akshat Shrivastava · Chinnadhurai Sankar · Adithya Sagar · Mike Ross

Keywords: ENLSP-Main


Abstract:

Multi-task learning (MTL), instruction tuning, and prompting have recently been shown to improve the generalizability of large language models to new tasks. However, the benefits of such methods are less well-documented in smaller language models, with some studies finding contradictory results. In this work, we explore and isolate the effects of (i) model size, (ii) general purpose MTL, (iii) in-domain MTL, and (iv) instruction tuning for models with fewer than 500 million parameters. Our experiments demonstrate that general purpose MTL improves performance by 31% on average, with further in-domain MTL improving performance by an additional 37.6% on average. We find that instruction tuning provides a modest 2% performance improvement for small models.
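To make the experimental conditions concrete, below is a minimal sketch of instruction-tuned multi-task fine-tuning for a small (<500M parameter) model. This is not the authors' code: the choice of `t5-small`, the toy two-task mixture, and the instruction prefixes are illustrative assumptions only, meant to show how the (ii)–(iv) conditions in the abstract could be combined in practice.

```python
# Hypothetical sketch of instruction-tuned multi-task fine-tuning on a small model.
# Assumptions (not from the paper): t5-small as the <500M-parameter model,
# a toy two-task mixture, and hand-written instruction prefixes.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "t5-small"  # ~60M parameters, well under the 500M limit studied
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Multi-task mixture: each example pairs an instruction-formatted input with a target.
# With instruction tuning, the task description is prepended to the input text.
train_examples = [
    {"instruction": "Classify the sentiment of the sentence as positive or negative.",
     "input": "The movie was a delight from start to finish.",
     "target": "positive"},
    {"instruction": "Answer the question based on the passage.",
     "input": "Passage: The Nile flows north. Question: Which direction does the Nile flow?",
     "target": "north"},
]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
model.train()

for epoch in range(3):
    for example in train_examples:
        # Instruction tuning: concatenate the task instruction with the task input.
        source = f"{example['instruction']} {example['input']}"
        inputs = tokenizer(source, return_tensors="pt", truncation=True)
        labels = tokenizer(example["target"], return_tensors="pt",
                           truncation=True).input_ids
        # Ignore padding positions in the loss (standard practice for seq2seq labels).
        labels[labels == tokenizer.pad_token_id] = -100

        loss = model(**inputs, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

In this framing, the ablations in the abstract correspond to toggling pieces of the sketch: general purpose MTL widens the task mixture, in-domain MTL restricts it to tasks from the target domain, and removing the instruction prefix recovers the non-instruction-tuned condition.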
