

Poster in Workshop: Machine Learning in Structural Biology Workshop

Parameter-Efficient Fine Tuning of Protein Language Models Improves Prediction of Protein-Protein Interactions

Samuel Sledzieski · Meghana Kshirsagar · Rahul Dodhia · Bonnie Berger · Juan Lavista Ferres


Abstract:

Mirroring the massive increase in the size of transformer-based models in natural language processing, proteomics has seen increasingly large foundation protein language models. As model size grows, the computational and memory footprint of fine-tuning moves out of reach of many academic labs and small biotechs. In this work, we compare full fine-tuning of protein language models against two lighter-weight alternatives: training a classifier head on frozen representations, and the parameter-efficient fine-tuning method LoRA, on the task of predicting protein-protein interactions. We find that LoRA outperforms full fine-tuning while requiring a smaller memory footprint, and that frozen embeddings remain a viable alternative when fine-tuning is computationally impractical.
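To make the LoRA setup concrete, below is a minimal sketch using an ESM-2 backbone from HuggingFace transformers together with the peft library. The checkpoint name, LoRA hyperparameters (r, alpha, target modules), and the pair-encoding scheme are illustrative assumptions, not the paper's exact configuration.

    import torch
    from transformers import AutoTokenizer, EsmForSequenceClassification
    from peft import LoraConfig, TaskType, get_peft_model

    # Hypothetical backbone; the paper's exact checkpoint may differ.
    model_name = "facebook/esm2_t33_650M_UR50D"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = EsmForSequenceClassification.from_pretrained(model_name, num_labels=2)

    # LoRA adapters on the attention projections; r and alpha are illustrative.
    lora_cfg = LoraConfig(
        task_type=TaskType.SEQ_CLS,
        r=8,
        lora_alpha=16,
        lora_dropout=0.05,
        target_modules=["query", "value"],
    )
    model = get_peft_model(model, lora_cfg)
    model.print_trainable_parameters()  # only adapter + head weights are trainable

    # One illustrative pair encoding: concatenate the two sequences
    # (the paper's actual pair-encoding strategy is not specified here).
    seq_a, seq_b = "MKTAYIAKQR", "MLEHHHHHHA"
    inputs = tokenizer(seq_a + seq_b, return_tensors="pt")
    labels = torch.tensor([1])  # 1 = interacting pair, 0 = non-interacting

    loss = model(**inputs, labels=labels).loss
    loss.backward()  # gradients flow only through LoRA and classifier parameters

Training only the low-rank adapters and the classifier head is what keeps the memory footprint well below that of full fine-tuning; the frozen-embedding baseline goes further still, precomputing representations once and training only the head.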
