

Spotlight in Workshop: AI for Accelerated Materials Design (AI4Mat-2023)

HoneyBee: Progressive Instruction Finetuning of Large Language Models for Materials Science

Yu Song · Santiago Miret · Huan Zhang · Bang Liu

Keywords: [ instruction-based finetuning ] [ materials science ] [ feedback-based instructions ] [ LLaMa ] [ progressive finetuning ]

Fri 15 Dec 1:40 p.m. PST — 1:50 p.m. PST

Abstract:

We propose an instruction-based process for trustworthy data curation in materials science (MatSci-Instruct), which we then apply to finetune a LLaMa-based language model targeted for materials science (HoneyBee). MatSci-Instruct helps alleviate the scarcity of relevant, high-quality materials science textual data available in the open literature, and HoneyBee is the first billion-parameter language model specialized to materials science. In MatSci-Instruct we improve the trustworthiness of generated data by generating with an Instructor module built on a commercially available large language model (e.g., ChatGPT) and verifying with an independent Verifier module (e.g., Claude). Using MatSci-Instruct, we construct a dataset spanning multiple tasks and measure its quality along multiple dimensions, including accuracy against known facts, relevance to materials science, and completeness and reasonableness of the data. Moreover, we iteratively generate more targeted instructions in a finetuning-evaluation-feedback loop, leading to progressively better performance for our finetuned HoneyBee models. Our evaluation on the MatSci-NLP benchmark shows that HoneyBee outperforms existing language models on materials science tasks and improves iteratively across successive stages of instruction refinement. We study the quality of HoneyBee's language modeling through automatic evaluation and analyze case studies to further understand the model's capabilities and limitations.
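The generate-and-verify curation loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the scoring dimensions as a dict, and the acceptance threshold are all assumptions; in the actual MatSci-Instruct pipeline both modules are prompts to commercial LLMs (e.g., ChatGPT as Instructor, Claude as Verifier).

```python
# Hypothetical sketch of the MatSci-Instruct curation loop: an Instructor
# generates instruction-response pairs, an independent Verifier scores them
# along the dimensions named in the abstract, and low-scoring samples are
# discarded. Stub functions stand in for the actual LLM calls.

def instructor_generate(topic):
    """Stand-in for an Instructor LLM producing an instruction sample."""
    return {
        "instruction": f"Describe the key electronic properties of {topic}.",
        "response": f"A short materials-science answer about {topic}.",
    }

def verifier_score(sample):
    """Stand-in for a Verifier LLM scoring a sample in [0, 1] along the
    four quality dimensions from the abstract."""
    return {
        "accuracy": 0.90,
        "relevance": 0.95,
        "completeness": 0.80,
        "reasonableness": 0.85,
    }

def curate(topics, threshold=0.75):
    """Keep only samples whose worst dimension score clears the threshold
    (the threshold value here is an illustrative assumption)."""
    dataset = []
    for topic in topics:
        sample = instructor_generate(topic)
        scores = verifier_score(sample)
        if min(scores.values()) >= threshold:
            dataset.append(sample)
    return dataset

curated = curate(["silicon", "gallium nitride"])
print(len(curated))  # → 2: both stub samples pass the threshold
```

The curated samples would then feed the finetuning-evaluation-feedback loop, with evaluation results steering the Instructor toward more targeted instructions in the next round.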
