

Poster

CultureLLM: Incorporating Cultural Differences into Large Language Models

Cheng Li · Mengzhuo Chen · Jindong Wang · Sunayana Sitaram · Xing Xie

West Ballroom A-D #5504
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract: Large language models (LLMs) have been observed to exhibit bias towards certain cultures due to the predominance of training data obtained from English corpora. Considering that multilingual cultural data is often expensive to procure, existing methodologies address this challenge through prompt engineering or culture-specific pre-training. However, these strategies may neglect the knowledge deficiency of low-resource cultures and necessitate substantial computing resources. In this paper, we propose CultureLLM, a cost-effective solution to integrate cultural differences into LLMs. CultureLLM employs the World Value Survey (WVS) as seed data and generates semantically equivalent training data through the proposed semantic data augmentation. Utilizing only $50$ seed samples from WVS with augmented data, we fine-tune culture-specific LLMs as well as a unified model (CultureLLM-One) for $9$ cultures, encompassing both rich and low-resource languages. Extensive experiments conducted on $60$ culture-related datasets reveal that CultureLLM significantly surpasses various counterparts such as GPT-3.5 (by $8.1$\%) and Gemini Pro (by $9.5$\%), demonstrating performance comparable to or exceeding that of GPT-4. Our human study indicates that the generated samples maintain semantic equivalence to the original samples, offering an effective solution for LLM augmentation. Code is released at https://github.com/Scarelette/CultureLLM.
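The abstract outlines the general recipe: expand a small set of survey-style seed samples into semantically equivalent variants, then fine-tune on the augmented data. The snippet below is a minimal sketch of that idea, not the authors' released pipeline (see the linked repository for the actual implementation); the helper `paraphrase_with_llm` and the chat-style JSONL layout are assumptions standing in for whichever generation backend and fine-tuning format is used.

```python
# Minimal sketch: augment seed survey samples with LLM-generated paraphrases
# and write them out as chat-style fine-tuning records (one JSON object per line).
import json
import random


def paraphrase_with_llm(question: str, n: int = 5) -> list[str]:
    """Hypothetical helper: ask an LLM for n meaning-preserving paraphrases."""
    raise NotImplementedError("plug in your preferred LLM client here")


def build_training_file(seed_samples: list[dict], path: str) -> None:
    """seed_samples: [{'question': ..., 'answer': ...}, ...] drawn from survey data."""
    records = []
    for sample in seed_samples:
        # Keep the original question and add semantically equivalent variants.
        variants = [sample["question"]] + paraphrase_with_llm(sample["question"])
        for q in variants:
            records.append({
                "messages": [
                    {"role": "user", "content": q},
                    {"role": "assistant", "content": sample["answer"]},
                ]
            })
    random.shuffle(records)
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```

The resulting JSONL file can then be fed to a standard supervised fine-tuning workflow, either per culture or pooled across cultures for a single unified model.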
