Here, we present a severity assessment technique based on an interpretable artificial intelligence (AI) method. Our model builds on a multi-reader dataset of 1208 chest X-rays (CXRs) from 396 patients at Emory University affiliated hospitals, each with an RT-PCR test confirmed during the study period. Every CXR was labeled by 6 expert chest radiologists and 2 in-training residents as normal, mild, moderate, or severe, according to the degree of consolidation and opacity. We train a convolutional neural network (CNN) using a two-stage transfer-learning approach and show that the model outperforms the radiologists and residents on unseen data, with average areas under the curve (AUCs) of 0.97, 0.92, 0.86, and 0.96 for the normal, mild, moderate, and severe classes, respectively. Finally, we visualize the outputs of the most important filters of the CNN, identified with a pruning method, to unlock the black box and provide intuition about the CNN's decision-making process.
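The per-class AUCs reported above are one-vs-rest metrics for a four-class problem. As a reference for how such a figure is computed, the sketch below (a hypothetical `auc_ovr` helper, not the authors' code) implements the pairwise-ranking definition of AUC: the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one, with ties counting half.

```python
def auc_ovr(labels, scores, positive):
    """One-vs-rest AUC: fraction of (positive, negative) score pairs
    in which the positive example is ranked higher (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == positive]
    neg = [s for y, s in zip(labels, scores) if y != positive]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: model scores for the "severe" class on six CXRs.
labels = ["severe", "mild", "severe", "normal", "moderate", "severe"]
scores = [0.9, 0.2, 0.8, 0.1, 0.4, 0.7]
print(auc_ovr(labels, scores, "severe"))  # → 1.0 (perfect separation)
```

In practice one would report this per class (normal, mild, moderate, severe) by treating each class in turn as the positive label against all others, which matches the four per-class AUCs quoted in the abstract.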