Machine Learning (ML) research has focused on maximizing the accuracy of predictive tasks. ML models, however, are increasingly complex, resource-intensive, and costly to deploy in resource-constrained environments. These issues are exacerbated for prediction tasks that involve sequential classification over progressively transitioned stages with a “happens-before” relation between them. We argue that it is possible to “unfold” a monolithic single multi-class classifier, typically trained for all stages using all data, into a series of single-stage classifiers. Each single-stage classifier can be cascaded gradually from cheaper to more expensive binary classifiers that are trained using only the data modalities or features required for that stage. UnfoldML is a cost-aware, uncertainty-based dynamic 2D prediction pipeline for multi-stage classification that enables (1) navigation of the accuracy/cost tradeoff space, (2) reduction of the spatio-temporal cost of inference by orders of magnitude, and (3) early prediction of upcoming stages. UnfoldML achieves orders-of-magnitude lower cost in clinical settings while detecting multi-stage disease development in real time. It achieves accuracy within 0.1% of the highest-performing multi-class baseline, while saving close to 20X on the spatio-temporal cost of inference and predicting disease onset 3.5 hours earlier. We also show that UnfoldML generalizes to image classification, where it can predict labels at different levels (from coarse to fine) given different levels of abstraction of an image, saving close to 5X in cost with as little as 0.4% accuracy reduction.
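The abstract describes a 2D inference structure: an outer progression through “happens-before” stages and an inner cascade from cheaper to more expensive per-stage classifiers, gated by prediction uncertainty. Below is a minimal sketch (not the authors' code) of this idea, assuming hypothetical names (CascadeModel, unfold_predict) and hypothetical confidence thresholds, with each model exposing a predict_proba callable that returns the probability of the current stage's onset.

```python
# Minimal sketch of UnfoldML-style 2D cascaded inference (illustrative only).
from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class CascadeModel:
    predict_proba: Callable[[dict], float]  # returns P(current stage onset) for one sample
    cost: float                             # relative inference cost (e.g., latency or FLOPs)

def unfold_predict(
    x: dict,                                # one sample's features/modalities, keyed by name
    stages: Sequence[List[CascadeModel]],   # stages[i] = cascade for stage i, cheap -> expensive
    confident_hi: float = 0.9,              # assumed threshold: advance to the next stage
    confident_lo: float = 0.1,              # assumed threshold: stop early (stage ruled out)
) -> int:
    """Return the deepest stage index the sample is predicted to have reached (-1 if none)."""
    reached = -1
    for stage_idx, cascade in enumerate(stages):
        p = None
        for model in cascade:               # inner cascade: try the cheapest model first
            p = model.predict_proba(x)
            # Escalate to a costlier model only while the prediction is still uncertain.
            if p <= confident_lo or p >= confident_hi:
                break
        if p is not None and p >= confident_hi:
            reached = stage_idx             # stage confirmed; unfold to the next stage
        else:
            break                           # uncertain or negative: stop unfolding here
    return reached

# Example usage with dummy models (hypothetical probabilities and costs):
stage0 = [CascadeModel(lambda x: 0.95, cost=1.0)]
stage1 = [CascadeModel(lambda x: 0.5, cost=1.0), CascadeModel(lambda x: 0.05, cost=10.0)]
print(unfold_predict({"hr": 88}, [stage0, stage1]))  # -> 0 (stage 0 confirmed, stage 1 ruled out)
```

In this sketch, the expensive stage-1 model is consulted only because the cheap one was uncertain, which is the mechanism by which the cascade trades a small amount of accuracy for large inference-cost savings.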
Author Information
Yanbo Xu (Georgia Institute of Technology)
Alind Khare (Georgia Institute of Technology)
Glenn Matlin (Georgia Institute of Technology)
Monish Ramadoss (Georgia Institute of Technology)
Rishikesan Kamaleswaran (Emory University)
Chao Zhang (Georgia Institute of Technology)
Alexey Tumanov (Georgia Institute of Technology)