

Poster in Affinity Workshop: Latinx in AI

Task-Specific or Task-Agnostic? A Statistical Inquiry into BERT for Human Trafficking Risk Prediction

Ana Paula Arguelles Terron · Jorge Yero Salazar · Pablo Rivas


Abstract:

The pervasive issue of human trafficking has increasingly manifested through digital platforms, particularly in the form of textual online advertisements. Leveraging Natural Language Processing (NLP) for risk assessment in this domain has garnered significant attention. This study presents a comprehensive empirical evaluation of machine learning models fine-tuned for emotion and sentiment analysis tasks, specifically utilizing the BERT-Base Uncased and DistilBERT architectures. These models are rigorously compared against a baseline model, also fine-tuned on the BERT-Base Uncased architecture, for the task of human trafficking risk prediction. Employing robust statistical methodologies, namely the Friedman and Nemenyi tests, we scrutinize the performance metrics of these models. Our findings indicate that while task-specific fine-tuned models exhibit promising results, they do not statistically outperform the baseline model in the human trafficking risk prediction task. This research not only contributes to the growing body of work in NLP applications for social good but also provides valuable insights for future research directions in the field.
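The Friedman and Nemenyi procedure mentioned above can be sketched as follows. This is a minimal pure-Python illustration, not the study's actual analysis: the model names, fold scores, and the choice of five folds are made up for demonstration, and the Nemenyi critical value q_0.05 = 2.343 for k = 3 classifiers is taken from the standard studentized-range table (Demšar, 2006).

```python
# Hedged sketch of a Friedman test plus Nemenyi critical difference,
# comparing hypothetical per-fold scores of three fine-tuned models.
from math import sqrt

# Illustrative (made-up) F1 scores on five cross-validation folds.
scores = {
    "baseline-bert":    [0.81, 0.79, 0.83, 0.80, 0.82],
    "emotion-bert":     [0.82, 0.80, 0.82, 0.81, 0.83],
    "sentiment-distil": [0.78, 0.77, 0.80, 0.79, 0.78],
}

models = list(scores)
k, n = len(models), len(next(iter(scores.values())))

# Rank models within each fold (rank 1 = best; ties share the average rank),
# accumulating each model's average rank across folds.
avg_ranks = {m: 0.0 for m in models}
for fold in range(n):
    ordered = sorted(((scores[m][fold], m) for m in models), reverse=True)
    i = 0
    while i < len(ordered):
        j = i
        while j + 1 < len(ordered) and ordered[j + 1][0] == ordered[i][0]:
            j += 1
        mean_rank = (i + j) / 2 + 1  # average of tied positions, 1-based
        for t in range(i, j + 1):
            avg_ranks[ordered[t][1]] += mean_rank / n
        i = j + 1

# Friedman chi-square statistic over the average ranks.
chi2 = (12 * n / (k * (k + 1))) * (
    sum(r * r for r in avg_ranks.values()) - k * (k + 1) ** 2 / 4
)

# Nemenyi critical difference at alpha = 0.05 (q_0.05 = 2.343 for k = 3).
cd = 2.343 * sqrt(k * (k + 1) / (6 * n))

print(f"average ranks: {avg_ranks}")
print(f"Friedman chi2 = {chi2:.3f}, Nemenyi CD = {cd:.3f}")
```

If the Friedman statistic is not significant, or if pairwise average-rank gaps fall below the critical difference, the models are statistically indistinguishable on these folds, which mirrors the paper's conclusion that the task-specific models do not significantly beat the baseline.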
