

Poster in Affinity Workshop: WiML Workshop 1

Evaluating the Impact of Embedding Representations on Deception Detection

Ellyn Ayton · Maria Glenski


Abstract:

Contextualized word embeddings underpin most state-of-the-art machine learning models used in natural language processing, natural language understanding, and more. With the ever-increasing number of models, there is a multitude of options to choose from. It can be difficult to assess the strengths, weaknesses, or biases of these models; for the average ML practitioner, pretrained embeddings can seem like black boxes, because pretraining requires significant computational resources and time and removes the practitioner's ability to control or identify biases present in the training data that can affect downstream behavior. In this ongoing work, we evaluate the extent to which the choice of pretrained embeddings impacts downstream performance on a deception detection task.

We evaluate how seven variations of four popular text embedding models affect deception detection performance: ALBERT (base v2 and XXLarge v2), BERT (base cased, base uncased, and multilingual), DistilBERT (base), and RoBERTa (base). We leverage the Hugging Face[3] library to train a model for each embedding variation, using a consistent architecture of an embedding layer, a varying number of LSTM layers, and a final output layer to classify social media posts as deceptive or credible. We use a dataset of social news posts from Twitter and Reddit in 2016 that has been used previously for deception detection evaluations[1,2]; it contains 40k posts linked to credible sources and 55k posts linked to deceptive sources (e.g., sources that share clickbait, propaganda, or disinformation). We reserved 20% of posts for testing and 10% for validation, tuned hyperparameters via grid search, and trained for 100 epochs.
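As a rough illustration of the shared architecture described above, the following is a minimal PyTorch sketch using Hugging Face Transformers. It is not the authors' implementation: the hub checkpoint names, the frozen embedding layer, and the hidden-size and layer-count values are assumptions for illustration; the abstract specifies only an embedding layer, a varying number of LSTM layers, and a final output layer.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class DeceptionClassifier(nn.Module):
    """Pretrained embedding layer -> LSTM stack -> binary output head."""

    def __init__(self, checkpoint, hidden_size=256, num_lstm_layers=2):
        super().__init__()
        # Pretrained transformer used as a fixed contextual embedding layer
        # (freezing it is an assumption; the abstract does not say either way).
        self.embedder = AutoModel.from_pretrained(checkpoint)
        self.embedder.requires_grad_(False)
        self.lstm = nn.LSTM(
            input_size=self.embedder.config.hidden_size,
            hidden_size=hidden_size,
            num_layers=num_lstm_layers,
            batch_first=True,
        )
        self.output = nn.Linear(hidden_size, 2)  # deceptive vs. credible

    def forward(self, input_ids, attention_mask):
        with torch.no_grad():
            token_states = self.embedder(
                input_ids=input_ids, attention_mask=attention_mask
            ).last_hidden_state
        _, (h_n, _) = self.lstm(token_states)
        # Logits from the final LSTM layer's last hidden state.
        return self.output(h_n[-1])

# The seven embedding variants compared in the abstract. The exact hub
# checkpoints are assumptions (e.g., cased vs. uncased for DistilBERT and
# multilingual BERT is not stated in the abstract).
CHECKPOINTS = [
    "albert-base-v2", "albert-xxlarge-v2",
    "bert-base-cased", "bert-base-uncased", "bert-base-multilingual-cased",
    "distilbert-base-uncased", "roberta-base",
]

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINTS[0])
model = DeceptionClassifier(CHECKPOINTS[0])
batch = tokenizer(
    ["example social media post"],
    return_tensors="pt", padding=True, truncation=True,
)
logits = model(batch["input_ids"], batch["attention_mask"])
```

Keeping the LSTM-and-output head identical while swapping only the checkpoint is what lets differences in downstream performance be attributed to the choice of pretrained embedding rather than to the classifier itself.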
