

Poster in Workshop: Medical Imaging meets NeurIPS

Assessing Self-Supervised Pretraining for Multiple Lung Ultrasound Interpretation Tasks

Blake VanBerlo · Brian Li · Jesse Hoey · Alexander Wong


Abstract:

In this study, we investigated whether self-supervised pretraining could produce a neural network feature extractor applicable to multiple tasks in B-mode lung ultrasound analysis. When fine-tuning for three tasks, pretrained models improved the average across-task area under the receiver operating characteristic curve (AUC) by 0.032 and 0.061 on local and external test sets, respectively. When training with 1% of the available labels, pretrained models consistently outperformed fully supervised models, with a maximum observed test AUC increase of 0.396 for the view classification task. Overall, the results indicate that self-supervised pretraining is useful for producing initial weights for lung ultrasound classifiers.
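
To illustrate the pretrain-then-fine-tune workflow the abstract describes, the sketch below fine-tunes a classifier head on top of a pretrained encoder and reports AUC. It is a minimal, hypothetical example in PyTorch with synthetic stand-in data: the encoder architecture, checkpoint name, task head, and hyperparameters are assumptions for illustration only, not the authors' actual pipeline.

# Minimal sketch: fine-tune a (self-supervised) pretrained encoder for one
# binary lung ultrasound task and evaluate with AUC. Synthetic data stands
# in for B-mode frames; all names and settings here are illustrative.
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

# Stand-in encoder; in practice this would be the self-supervised backbone.
encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
# encoder.load_state_dict(torch.load("ssl_pretrained_encoder.pt"))  # hypothetical checkpoint

head = nn.Linear(32, 1)                 # binary task head (e.g. view classification)
model = nn.Sequential(encoder, head)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()

# Synthetic small labelled subset (analogous to training with few labels).
x = torch.randn(64, 1, 64, 64)
y = torch.randint(0, 2, (64, 1)).float()

model.train()
for _ in range(5):                      # a few fine-tuning steps for illustration
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

# Evaluate with area under the ROC curve, the metric reported in the abstract.
model.eval()
with torch.no_grad():
    scores = torch.sigmoid(model(x)).squeeze(1).numpy()
print("train AUC:", roc_auc_score(y.squeeze(1).numpy(), scores))

In a label-efficiency comparison like the one reported, the same loop would be run once from the pretrained checkpoint and once from random initialization on the reduced labelled subset, and the resulting test AUCs compared.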
