

Poster
in
Workshop: Medical Imaging meets NeurIPS

MIMIC-NLE-v2: Can Large Language Models Reason about Chest X-rays?

Maxime Kayser · Oana-Maria Camburu · Thomas Lukasiewicz


Abstract:

Diagnosing medical images requires reasoning: radiologists typically identify individual findings on a scan and then integrate them, in light of the patient's condition, to form an overall diagnosis. At the same time, large language models (LLMs) have demonstrated remarkable language reasoning skills, and these capabilities are currently being adapted to vision and vision-language problems. In this work, we investigate whether vision-enabled LLMs can reason about patient context and radiographic observations to arrive at a diagnosis. To this end, we first propose MIMIC-NLE-v2, a new chest X-ray dataset for Natural Language Explanations. Next, we compare different methods for training models of up to 30 billion parameters to reason about chest X-rays. We then show how their reasoning capabilities can lead to improvements in other image analysis tasks.
