Poster
Backward-Compatible Prediction Updates: A Probabilistic Approach
Frederik Träuble · Julius von Kügelgen · Matthäus Kleindessner · Francesco Locatello · Bernhard Schölkopf · Peter Gehler

Tue Dec 07 08:30 AM -- 10:00 AM (PST)

When machine learning systems meet real-world applications, accuracy is only one of several requirements. In this paper, we assay a complementary perspective originating from the increasing availability of pre-trained and regularly improving state-of-the-art models. While new, improved models develop at a fast pace, downstream tasks vary more slowly or stay constant. Assume that we have a large unlabelled data set for which we want to maintain accurate predictions. Whenever a new and presumably better ML model becomes available, we encounter two problems: (i) given a limited budget, which data points should be re-evaluated using the new model?; and (ii) if the new predictions differ from the current ones, should we update? Problem (i) is about compute cost, which matters for very large data sets and models. Problem (ii) is about maintaining consistency of the predictions, which can be highly relevant for downstream applications; our demand is to avoid negative flips, i.e., changing correct to incorrect predictions. In this paper, we formalize the Prediction Update Problem and present an efficient probabilistic approach as an answer to the above questions. In extensive experiments on standard classification benchmark data sets, we show that our method outperforms alternative strategies along key metrics for backward-compatible prediction updates.
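The two problems in the abstract can be made concrete with a toy sketch. This is not the authors' probabilistic method; it is a simple confidence-based heuristic on synthetic data, where all names (`old_conf`, `budget`, etc.) are illustrative assumptions: re-evaluate the points the current model is least confident about (problem i), and accept a new prediction only where the new model is strictly more confident, to lower the risk of negative flips (problem ii).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: max-class softmax confidences and predicted labels of the
# current ("old") model on a large unlabelled pool. All values synthetic.
n_points, budget = 1000, 100
old_conf = rng.uniform(0.3, 1.0, size=n_points)  # old model's confidence
old_pred = rng.integers(0, 10, size=n_points)    # old model's class labels

# (i) Budgeted selection: spend the re-evaluation budget on the points
# where the old model is least confident.
candidates = np.argsort(old_conf)[:budget]

# Simulate querying the new model on the selected points only.
new_conf = np.clip(old_conf[candidates] + rng.normal(0.1, 0.2, size=budget), 0.0, 1.0)
new_pred = rng.integers(0, 10, size=budget)

# (ii) Update rule: keep the old prediction unless the new model is
# strictly more confident, reducing the chance of a negative flip.
accept = new_conf > old_conf[candidates]
updated_pred = old_pred.copy()
updated_pred[candidates[accept]] = new_pred[accept]

print(f"re-evaluated {budget} points, updated {accept.sum()} predictions")
```

The heuristic touches only `budget` of the `n_points` predictions, so the remaining pool stays bitwise identical to the old model's output, which is the consistency property downstream consumers rely on.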

Author Information

Frederik Träuble (Max Planck Institute for Intelligent Systems)
Julius von Kügelgen (Max Planck Institute for Intelligent Systems Tübingen & University of Cambridge)
Matthäus Kleindessner (Amazon AWS)
Francesco Locatello (Amazon)
Bernhard Schölkopf (MPI for Biological Cybernetics)
Peter Gehler (Max Planck Institute for Informatics)
