

Poster in Workshop: Medical Imaging Meets NeurIPS

Autoencoder Image Compression Algorithm for Reduction of Resource Requirements

Young Joon Kwon


Abstract:

State-of-the-art machine learning (ML) models require exponentially increasing amounts of compute resources. We designed a lightweight machine learning algorithm for medical image compression that preserves diagnostic utility. Our compression algorithm was a two-level, vector-quantized variational autoencoder (VQ-VAE-2). We trained the algorithm in a self-supervised manner on CheXpert radiographs and validated it externally on previously unseen MIMIC-CXR radiographs. We also used the compressed latent vectors or the reconstructed CheXpert images as inputs to train a DenseNet-121 classifier. The VQ-VAE achieved 2.5 times the compression ratio of the current JPEG 2000 standard at a similar Fréchet inception distance. The classifier trained on latent vectors had an AUROC similar to that of the model trained on the original images. Model training with latent vectors required 6.2% of the memory and compute and 48.5% of the time per epoch compared to training with the original images. Autoencoders can decrease resource requirements for future ML research.
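The sketch below illustrates the general idea of the pipeline described in the abstract: a vector-quantized autoencoder is trained self-supervised on reconstruction, and the discrete latent representation is then fed to a DenseNet-121 classifier. It is a minimal illustration only, assuming PyTorch/torchvision; it uses a single quantization level rather than the paper's two-level VQ-VAE-2, and all layer sizes, codebook size, and the first-convolution adaptation of DenseNet-121 are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch (assumptions: PyTorch, one VQ level, illustrative layer sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import densenet121


class VectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup with a straight-through gradient."""
    def __init__(self, num_codes=512, code_dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta

    def forward(self, z):                                   # z: (B, C, H, W)
        b, c, h, w = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, c)         # (B*H*W, C)
        idx = torch.cdist(flat, self.codebook.weight).argmin(dim=1)
        z_q = self.codebook(idx).view(b, h, w, c).permute(0, 3, 1, 2)
        # Standard VQ-VAE codebook + commitment losses.
        vq_loss = F.mse_loss(z_q, z.detach()) + self.beta * F.mse_loss(z, z_q.detach())
        z_q = z + (z_q - z).detach()                        # straight-through estimator
        return z_q, idx.view(b, h, w), vq_loss


class VQAE(nn.Module):
    """Convolutional encoder/decoder around the quantizer (1-channel radiographs)."""
    def __init__(self, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, code_dim, 4, stride=2, padding=1),
        )
        self.quantizer = VectorQuantizer(code_dim=code_dim)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(code_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z_q, codes, vq_loss = self.quantizer(self.encoder(x))
        recon = self.decoder(z_q)
        return recon, codes, F.mse_loss(recon, x) + vq_loss


# Self-supervised reconstruction training step (stand-in batch of radiographs).
model = VQAE()
x = torch.randn(2, 1, 256, 256)
recon, codes, loss = model(x)
loss.backward()

# Classifier on compressed latents: DenseNet-121 with its first convolution
# adapted to accept the latent channels instead of RGB (an assumed adaptation,
# with 14 outputs as an example of the CheXpert observation labels).
clf = densenet121(num_classes=14)
clf.features.conv0 = nn.Conv2d(64, 64, kernel_size=7, stride=2, padding=3, bias=False)
with torch.no_grad():
    z_q, _, _ = model.quantizer(model.encoder(x))
logits = clf(z_q)                                           # (2, 14) logits
```

Training the classifier directly on the compact latent maps, rather than on full-resolution reconstructions, is what drives the reported memory, compute, and per-epoch time savings.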
