

Poster
in
Workshop: AI meets Moral Philosophy and Moral Psychology: An Interdisciplinary Dialogue about Computational Ethics

#45: Assessing LLMs for Moral Value Pluralism - (Spoiler Alert: They’re not There Yet)

Sonja Schmer-Galunder · Noam Benkler · Drisana Mosaphir · Andrew Smart · Scott Friedman

Keywords: [ AI systems ] [ Recognizing Value Resonance ] [ cultural bias ] [ Moral Value Plurality ] [ LLMs ]


Abstract:

Moral values are important indicators of socio-cultural norms and behavior, and they guide our moral judgment and identity. Decades of social science research have developed and refined widely accepted surveys, such as the World Values Survey (WVS), that elicit value judgments from direct questions, enabling social scientists to measure higher-level moral values and even cultural value distance. While the WVS is accepted as an explicit assessment of values, we lack methods for assessing the plurality of implicit moral and cultural values in media, e.g., as encountered in social media, political rhetoric, and narratives, and as generated by AI systems such as the large language models (LLMs) that are gaining a foothold in our daily lives. As we consume online content and utilize LLM outputs, we might ask, practically or academically, which moral values are being implicitly promoted or undercut, or, in the case of LLMs, whether a model that intends to represent a cultural identity does so consistently. In this paper we utilize a Recognizing Value Resonance (RVR) NLP model to identify WVS values that resonate or conflict with a passage of text. We apply RVR to text generated by LLMs to characterize their implicit moral values, allowing us to quantify the moral/cultural distance between LLMs and various demographics that have been surveyed using the WVS. Our results highlight value misalignment for non-WEIRD nations from various clusters of the WVS cultural map, as well as age misalignment across nations.
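To make the "moral/cultural distance" idea concrete, here is a minimal illustrative sketch, not the authors' method: it assumes an RVR-style model scores a body of text on a handful of WVS-style value dimensions (positive for resonance, negative for conflict), and that distance between an LLM's profile and a surveyed demographic's profile is then a simple vector distance. The dimension names and scores below are invented for illustration and do not come from the paper.

```python
import math

# Hypothetical WVS-style value dimensions (the real WVS covers many more).
WVS_DIMENSIONS = ["traditional_vs_secular", "survival_vs_self_expression"]

# Assumed outputs: resonance scores in [-1, 1] per dimension, one profile
# for an LLM's generated text and one for a surveyed demographic.
llm_profile = {"traditional_vs_secular": 0.6,
               "survival_vs_self_expression": 0.7}
demographic_profile = {"traditional_vs_secular": -0.2,
                       "survival_vs_self_expression": 0.4}

def value_distance(a, b, dims=WVS_DIMENSIONS):
    """Euclidean distance between two value-resonance profiles."""
    return math.sqrt(sum((a[d] - b[d]) ** 2 for d in dims))

print(round(value_distance(llm_profile, demographic_profile), 3))
```

Under this sketch, a larger distance would indicate greater misalignment between the implicit values in the LLM's output and the demographic's explicitly surveyed values; any real implementation would depend on the RVR model's actual scoring scheme.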
