Identifying Financial Risk Information with Contrastive Reasoning
Abstract
In specialized domains, humans often reason by comparing a new problem against similar examples, highlighting nuances, and drawing conclusions, rather than analyzing information in isolation. When LLMs reason in specialized contexts on top of a retrieval-augmented generation (RAG) pipeline, the pipeline can capture contextually relevant information, but it is not designed to retrieve comparable cases or related problems. While retrieval augmentation is effective at extracting factual information, its outputs on specialized reasoning tasks often remain generic, reflecting broad facts rather than context-specific insights. In finance, this produces generic risk statements that hold for the majority of companies. To address this limitation, we propose a peer-aware comparative inference layer. Our contrastive approach outperforms a baseline RAG pipeline on text-generation metrics such as ROUGE and BERTScore when evaluated against human-written equity research and risk disclosures.