Poster
Wisdom of the Ensemble: Improving Consistency of Deep Learning Models
Lijing Wang · Dipanjan Ghosh · Maria Gonzalez Diaz · Ahmed Farahat · Mahbubul Alam · Chetan Gupta · Jiangzhuo Chen · Madhav Marathe

Wed Dec 09 09:00 PM -- 11:00 PM (PST) @ Poster Session 4 #1174

Deep learning classifiers increasingly assist humans in making decisions, so the user's trust in these models is of paramount importance. Trust is often a function of consistent behavior: from an AI model's perspective, this means that given the same input, the user expects the same output, especially for correct outputs; in other words, consistently correct outputs. This paper studies model behavior in the context of the periodic retraining of deployed models, where the outputs of successive generations of a model may not agree on the correct labels assigned to the same input. We formally define the consistency and correct-consistency of a learning model. We prove that the consistency and correct-consistency of an ensemble learner are not less than the average consistency and correct-consistency of its individual learners, and that correct-consistency can be improved with a certain probability by combining learners whose accuracy is not less than the average accuracy of the ensemble's component learners. To validate the theory, we also propose an efficient dynamic snapshot ensemble method and demonstrate its value on three datasets with two state-of-the-art deep learning classifiers. Code for our algorithm is available at https://github.com/christa60/dynens.
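The two quantities in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation (see the linked repository for that): consistency is the fraction of inputs on which two successive model generations agree, correct-consistency is the fraction on which they agree on the correct label, and an ensemble prediction averages the component learners' class probabilities. All function names here are hypothetical.

```python
import numpy as np

def consistency(preds_a, preds_b):
    """Fraction of inputs on which two model generations predict the same label."""
    return np.mean(np.asarray(preds_a) == np.asarray(preds_b))

def correct_consistency(preds_a, preds_b, labels):
    """Fraction of inputs on which both generations agree AND the shared
    prediction matches the true label."""
    a, b, y = map(np.asarray, (preds_a, preds_b, labels))
    return np.mean((a == b) & (a == y))

def ensemble_predict(prob_list):
    """Combine component learners (e.g. snapshot models) by averaging their
    class-probability outputs, then take the argmax per sample.
    prob_list has shape (n_models, n_samples, n_classes)."""
    return np.argmax(np.mean(prob_list, axis=0), axis=1)

# Toy example: two successive generations evaluated on four inputs.
gen1 = [0, 1, 1, 0]
gen2 = [0, 1, 0, 0]
truth = [0, 1, 1, 1]
print(consistency(gen1, gen2))              # agree on 3 of 4 inputs -> 0.75
print(correct_consistency(gen1, gen2, truth))  # agree and correct on 2 of 4 -> 0.5
```

On this toy data the generations agree on three inputs but are jointly correct on only two, so consistency exceeds correct-consistency, the gap the paper's ensemble approach aims to close.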

Author Information

Lijing Wang (University of Virginia)
Dipanjan Ghosh (Industrial AI Labs, Hitachi Americas Ltd.)
Maria Gonzalez Diaz (Industrial AI Lab, Hitachi America Ltd.)
Ahmed Farahat (Industrial AI Lab, Hitachi America, Ltd. R&D)
Mahbubul Alam (Industrial AI Lab, Hitachi America, Ltd. R&D)
Chetan Gupta (Industrial AI Lab, Hitachi America R&D, Hitachi Americas Ltd.)
Jiangzhuo Chen (University of Virginia)
Madhav Marathe (Biocomplexity Institute & Initiative, University of Virginia)