Confidence-Gated LLM Synthesis for Enhanced Multi-Class Sentiment Analysis in Financial Texts
Abstract
Accurate sentiment analysis of news headlines, particularly in specialized domains like finance, is challenging due to textual nuance and inherent ambiguity. While Large Language Models (LLMs) excel at text understanding, their zero-shot performance can be inconsistent. We propose a novel framework in which an LLM acts as an intelligent synthesizer, combining raw text semantics with confidence-gated probabilistic outputs from a specialized intermediate multi-class sentiment classifier. This confidence-gating mechanism dynamically adjusts the information supplied to the LLM, guiding its reasoning process. Experiments on SentFiN, an established financial sentiment dataset, demonstrate that our approach significantly outperforms zero-shot LLM baselines and traditional machine learning combiners when it can leverage insights from a confident intermediate model.
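To make the gating concrete, the minimal sketch below shows one plausible realization of the mechanism: the intermediate classifier's probability distribution is surfaced in the LLM prompt only when its top class probability clears a confidence threshold; otherwise the LLM reasons from the raw headline alone. The threshold value, label set, function names, and prompt wording are illustrative assumptions, not the exact design evaluated in the paper.

```python
from typing import Sequence

# Assumed three-class setup and gate value; both are hypothetical choices
# for illustration, not the configuration reported in the experiments.
LABELS = ("negative", "neutral", "positive")
CONFIDENCE_THRESHOLD = 0.8

def build_llm_prompt(headline: str, class_probs: Sequence[float]) -> str:
    """Confidence-gate the intermediate classifier's output.

    If the classifier's top probability clears the threshold, its full
    distribution is included in the prompt so the LLM can weigh it
    against the text; otherwise the LLM sees only the raw headline.
    """
    top_prob = max(class_probs)
    prompt = f"Classify the sentiment of this financial headline: {headline!r}\n"
    if top_prob >= CONFIDENCE_THRESHOLD:
        dist = ", ".join(f"{label}: {p:.2f}" for label, p in zip(LABELS, class_probs))
        prompt += (
            "A specialized sentiment classifier produced this confident "
            f"probability distribution: {dist}. "
            "Weigh it alongside the text itself.\n"
        )
    prompt += f"Answer with exactly one of: {', '.join(LABELS)}."
    return prompt

# Usage: a confident classifier output is passed through to the LLM prompt,
# while an uncertain one (e.g. [0.4, 0.35, 0.25]) would be withheld.
print(build_llm_prompt("Shares of AcmeCorp plunge 12% on earnings miss",
                       [0.91, 0.06, 0.03]))
```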