Poster
DataComp-LM: In search of the next generation of training sets for language models
Jeffrey Li · Alex Fang · Georgios Smyrnis · Maor Ivgi · Matt Jordan · Samir Yitzhak Gadre · Hritik Bansal · Etash Guha · Sedrick Scott Keh · Kushal Arora · Saurabh Garg · Rui Xin · Niklas Muennighoff · Reinhard Heckel · Jean Mercat · Mayee Chen · Suchin Gururangan · Mitchell Wortsman · Alon Albalak · Yonatan Bitton · Marianna Nezhurina · Amro Abbas · Cheng-Yu Hsieh · Dhruba Ghosh · Josh Gardner · Maciej Kilian · Hanlin Zhang · Rulin Shao · Sarah Pratt · Sunny Sanyal · Gabriel Ilharco · Giannis Daras · Kalyani Marathe · Aaron Gokaslan · Jieyu Zhang · Khyathi Chandu · Thao Nguyen · Igor Vasiljevic · Sham Kakade · Shuran Song · Sujay Sanghavi · Fartash Faghri · Sewoong Oh · Luke Zettlemoyer · Kyle Lo · Alaaeldin El-Nouby · Hadi Pouransari · Alexander Toshev · Stephanie Wang · Dirk Groeneveld · Luca Soldaini · Pang Wei Koh · Jenia Jitsev · Thomas Kollar · Alex Dimakis · Yair Carmon · Achal Dave · Ludwig Schmidt · Vaishaal Shankar
West Ballroom A-D #5109
We introduce DataComp for Language Models (DCLM), a testbed for controlled dataset experiments with the goal of improving language models. As part of DCLM, we provide a standardized corpus of 240T tokens extracted from Common Crawl, effective pretraining recipes based on the OpenLM framework, and a broad suite of 53 downstream evaluations. Participants in the DCLM benchmark can experiment with data curation strategies such as deduplication, filtering, and data mixing at model scales ranging from 412M to 7B parameters. As a baseline for DCLM, we conduct extensive experiments and find that model-based filtering is key to assembling a high-quality training set. The resulting dataset, DCLM-Baseline, enables training a 7B parameter language model from scratch to 63% 5-shot accuracy on MMLU with 2T training tokens. Compared to MAP-Neo, the previous state-of-the-art in open-data language models, DCLM-Baseline represents a 6 percentage point improvement on MMLU while being trained with half the compute. Our results highlight the importance of dataset design for training language models and offer a starting point for further research on data curation. We release the DCLM benchmark, framework, models, and datasets at https://www.datacomp.ai/dclm/
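To make the "model-based filtering" step concrete, the sketch below shows one common way such a filter can be applied: score each document with a lightweight text classifier and keep only documents above a probability threshold. This is an illustrative sketch, not the exact DCLM pipeline; the model path, label name, and threshold are assumptions.

```python
# Minimal sketch of model-based quality filtering for a pretraining corpus.
# Assumes a fastText classifier trained to distinguish "high-quality" text;
# "quality_filter.bin", the "__label__hq" label, and the 0.5 threshold are
# illustrative placeholders, not the DCLM configuration.
import fasttext


def load_quality_filter(model_path: str = "quality_filter.bin"):
    """Load a fastText classifier that scores document quality."""
    return fasttext.load_model(model_path)


def keep_document(model, text: str, threshold: float = 0.5) -> bool:
    """Return True if the classifier's high-quality probability exceeds the threshold."""
    # fastText prediction expects single-line input, so collapse newlines first.
    labels, probs = model.predict(text.replace("\n", " "))
    score = probs[0] if labels[0] == "__label__hq" else 1.0 - probs[0]
    return score >= threshold


# Usage: filter a stream of raw documents down to the retained subset.
# model = load_quality_filter()
# kept = [doc for doc in raw_documents if keep_document(model, doc)]
```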