When machine learning systems meet real world applications, accuracy is only one of several requirements. In this paper, we assay a complementary perspective originating from the increasing availability of pre-trained and regularly improving state-of-the-art models. While new improved models develop at a fast pace, downstream tasks vary more slowly or stay constant. Assume that we have a large unlabelled data set for which we want to maintain accurate predictions. Whenever a new and presumably better ML model becomes available, we encounter two problems: (i) given a limited budget, which data points should be re-evaluated using the new model?; and (ii) if the new predictions differ from the current ones, should we update? Problem (i) is about compute cost, which matters for very large data sets and models. Problem (ii) is about maintaining consistency of the predictions, which can be highly relevant for downstream applications; in particular, we want to avoid negative flips, i.e., changing correct to incorrect predictions. In this paper, we formalize the Prediction Update Problem and present an efficient probabilistic approach as an answer to the above questions. In extensive experiments on standard classification benchmark data sets, we show that our method outperforms alternative strategies along key metrics for backward-compatible prediction updates.
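To make the notion of a negative flip from the abstract concrete, here is a minimal sketch of how one might measure it when comparing an old and a new model's predictions. The function name and the toy data are illustrative, not part of the paper's method:

```python
import numpy as np

def negative_flip_rate(y_true, old_pred, new_pred):
    """Fraction of samples where the old model was correct
    but the new model is wrong (a 'negative flip')."""
    y_true, old_pred, new_pred = map(np.asarray, (y_true, old_pred, new_pred))
    neg_flips = (old_pred == y_true) & (new_pred != y_true)
    return float(neg_flips.mean())

# Toy example: the new model is more accurate overall (4/5 vs 3/5),
# yet still introduces one negative flip at the last sample.
y_true   = np.array([0, 1, 1, 0, 2])
old_pred = np.array([0, 1, 0, 1, 2])
new_pred = np.array([0, 1, 1, 0, 1])
print(negative_flip_rate(y_true, old_pred, new_pred))  # 0.2
```

This illustrates why raw accuracy alone is insufficient for backward-compatible updates: a strictly more accurate new model can still break predictions that were previously correct.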
Author Information
Frederik Träuble (Max Planck Institute for Intelligent Systems)
Julius von Kügelgen (Max Planck Institute for Intelligent Systems Tübingen & University of Cambridge)
Matthäus Kleindessner (Amazon AWS)
Francesco Locatello (Amazon)
Bernhard Schölkopf (MPI for Intelligent Systems, Tübingen)
Peter Gehler (Amazon)
More from the Same Authors
-
2021 Spotlight: Iterative Teaching by Label Synthesis »
Weiyang Liu · Zhen Liu · Hanchen Wang · Liam Paull · Bernhard Schölkopf · Adrian Weller -
2021 Spotlight: DiBS: Differentiable Bayesian Structure Learning »
Lars Lorch · Jonas Rothfuss · Bernhard Schölkopf · Andreas Krause -
2022 : Active Bayesian Causal Inference »
Christian Toth · Lars Lorch · Christian Knoll · Andreas Krause · Franz Pernkopf · Robert Peharz · Julius von Kügelgen -
2022 : A Causal Framework to Quantify Robustness of Mathematical Reasoning with Language Models »
Alessandro Stolfo · Zhijing Jin · Kumar Shridhar · Bernhard Schölkopf · Mrinmaya Sachan -
2022 : Evaluating vaccine allocation strategies using simulation-assisted causal modelling »
Armin Kekić · Jonas Dehning · Luigi Gresele · Julius von Kügelgen · Viola Priesemann · Bernhard Schölkopf -
2022 : A General-Purpose Neural Architecture for Geospatial Systems »
Martin Weiss · Nasim Rahaman · Frederik Träuble · Francesco Locatello · Alexandre Lacoste · Yoshua Bengio · Erran Li Li · Chris Pal · Bernhard Schölkopf -
2023 : A Sparsity Principle for Partially Observable Causal Representation Learning »
Danru Xu · Dingling Yao · Sébastien Lachapelle · Perouz Taslakian · Julius von Kügelgen · Francesco Locatello · Sara Magliacane -
2023 : Independent Mechanism Analysis and the Manifold Hypothesis: Identifiability and Genericity »
Shubhangi Ghosh · Luigi Gresele · Julius von Kügelgen · Michel Besserve · Bernhard Schölkopf -
2023 : Self-Supervised Disentanglement by Leveraging Structure in Data Augmentations »
Cian Eastwood · Julius von Kügelgen · Linus Ericsson · Diane Bouchacourt · Pascal Vincent · Mark Ibrahim · Bernhard Schölkopf -
2023 : Multi-View Causal Representation Learning with Partial Observability »
Dingling Yao · Danru Xu · Sébastien Lachapelle · Sara Magliacane · Perouz Taslakian · Georg Martius · Julius von Kügelgen · Francesco Locatello -
2023 : Invited talk by Julius von Kügelgen (MPI Tübingen) »
Julius von Kügelgen -
2023 : Invited talk by Francesco Locatello (ISTA) »
Francesco Locatello -
2023 Workshop: UniReps: Unifying Representations in Neural Models »
Marco Fumero · Emanuele Rodolà · Francesco Locatello · Gintare Karolina Dziugaite · Mathilde Caron -
2023 Poster: Sample Complexity Bounds for Score-Matching: Causal Discovery and Generative Modeling »
Zhenyu Zhu · Francesco Locatello · Volkan Cevher -
2023 Poster: Latent Space Translation via Semantic Alignment »
Valentino Maiorca · Luca Moschella · Antonio Norelli · Marco Fumero · Francesco Locatello · Emanuele Rodolà -
2023 Poster: ASIF: Coupled Data Turns Unimodal Models to Multimodal without Training »
Antonio Norelli · Marco Fumero · Valentino Maiorca · Luca Moschella · Emanuele Rodolà · Francesco Locatello -
2023 Poster: Nonparametric Identifiability of Causal Representations from Unknown Interventions »
Julius von Kügelgen · Michel Besserve · Liang Wendong · Luigi Gresele · Armin Kekić · Elias Bareinboim · David Blei · Bernhard Schölkopf -
2023 Poster: Assumption violations in causal discovery and the robustness of score matching »
Francesco Montagna · Atalanti Mastakouri · Elias Eulig · Nicoletta Noceti · Lorenzo Rosasco · Dominik Janzing · Bryon Aragam · Francesco Locatello -
2023 Poster: Leveraging sparse and shared feature activations for disentangled representation learning »
Marco Fumero · Florian Wenzel · Luca Zancato · Alessandro Achille · Emanuele Rodolà · Stefano Soatto · Bernhard Schölkopf · Francesco Locatello -
2023 Poster: Causal Component Analysis »
Liang Wendong · Armin Kekić · Julius von Kügelgen · Simon Buchholz · Michel Besserve · Luigi Gresele · Bernhard Schölkopf -
2023 Poster: Rotating Features for Object Discovery »
Sindy Löwe · Phillip Lippe · Francesco Locatello · Max Welling -
2023 Poster: Spuriosity Didn’t Kill the Classifier: Using Invariant Predictions to Harness Spurious Features »
Cian Eastwood · Shashank Singh · Andrei L Nicolicioiu · Marin Vlastelica Pogančić · Julius von Kügelgen · Bernhard Schölkopf -
2023 Oral: Rotating Features for Object Discovery »
Sindy Löwe · Phillip Lippe · Francesco Locatello · Max Welling -
2022 Spotlight: Lightning Talks 1A-3 »
Kimia Noorbakhsh · Ronan Perry · Qi Lyu · Jiawei Jiang · Christian Toth · Olivier Jeunen · Xin Liu · Yuan Cheng · Lei Li · Manuel Rodriguez · Julius von Kügelgen · Lars Lorch · Nicolas Donati · Lukas Burkhalter · Xiao Fu · Zhongdao Wang · Songtao Feng · Ciarán Gilligan-Lee · Rishabh Mehrotra · Fangcheng Fu · Jing Yang · Bernhard Schölkopf · Ya-Li Li · Christian Knoll · Maks Ovsjanikov · Andreas Krause · Shengjin Wang · Hong Zhang · Mounia Lalmas · Bolin Ding · Bo Du · Yingbin Liang · Franz Pernkopf · Robert Peharz · Anwar Hithnawi · Julius von Kügelgen · Bo Li · Ce Zhang -
2022 Spotlight: Active Bayesian Causal Inference »
Christian Toth · Lars Lorch · Christian Knoll · Andreas Krause · Franz Pernkopf · Robert Peharz · Julius von Kügelgen -
2022 Spotlight: Causal Discovery in Heterogeneous Environments Under the Sparse Mechanism Shift Hypothesis »
Ronan Perry · Julius von Kügelgen · Bernhard Schölkopf -
2022 Spotlight: Embrace the Gap: VAEs Perform Independent Mechanism Analysis »
Patrik Reizinger · Luigi Gresele · Jack Brady · Julius von Kügelgen · Dominik Zietlow · Bernhard Schölkopf · Georg Martius · Wieland Brendel · Michel Besserve -
2022 Poster: Are Two Heads the Same as One? Identifying Disparate Treatment in Fair Neural Networks »
Michael Lohaus · Matthäus Kleindessner · Krishnaram Kenthapadi · Francesco Locatello · Chris Russell -
2022 Poster: Probable Domain Generalization via Quantile Risk Minimization »
Cian Eastwood · Alexander Robey · Shashank Singh · Julius von Kügelgen · Hamed Hassani · George J. Pappas · Bernhard Schölkopf -
2022 Poster: Causal Discovery in Heterogeneous Environments Under the Sparse Mechanism Shift Hypothesis »
Ronan Perry · Julius von Kügelgen · Bernhard Schölkopf -
2022 Poster: Embrace the Gap: VAEs Perform Independent Mechanism Analysis »
Patrik Reizinger · Luigi Gresele · Jack Brady · Julius von Kügelgen · Dominik Zietlow · Bernhard Schölkopf · Georg Martius · Wieland Brendel · Michel Besserve -
2022 Poster: Active Bayesian Causal Inference »
Christian Toth · Lars Lorch · Christian Knoll · Andreas Krause · Franz Pernkopf · Robert Peharz · Julius von Kügelgen -
2021 : Boxhead: A Dataset for Learning Hierarchical Representations »
Yukun Chen · Andrea Dittadi · Frederik Träuble · Stefan Bauer · Bernhard Schölkopf -
2021 : Julius von Kügelgen - Independent mechanism analysis, a new concept? »
Julius von Kügelgen -
2021 Poster: Dynamic Inference with Neural Interpreters »
Nasim Rahaman · Muhammad Waleed Gondal · Shruti Joshi · Peter Gehler · Yoshua Bengio · Francesco Locatello · Bernhard Schölkopf -
2021 Poster: Causal Influence Detection for Improving Efficiency in Reinforcement Learning »
Maximilian Seitzer · Bernhard Schölkopf · Georg Martius -
2021 Poster: Independent mechanism analysis, a new concept? »
Luigi Gresele · Julius von Kügelgen · Vincent Stimper · Bernhard Schölkopf · Michel Besserve -
2021 Poster: Iterative Teaching by Label Synthesis »
Weiyang Liu · Zhen Liu · Hanchen Wang · Liam Paull · Bernhard Schölkopf · Adrian Weller -
2021 Poster: The Inductive Bias of Quantum Kernels »
Jonas Kübler · Simon Buchholz · Bernhard Schölkopf -
2021 Poster: Self-Supervised Learning with Data Augmentations Provably Isolates Content from Style »
Julius von Kügelgen · Yash Sharma · Luigi Gresele · Wieland Brendel · Bernhard Schölkopf · Michel Besserve · Francesco Locatello -
2021 Poster: DiBS: Differentiable Bayesian Structure Learning »
Lars Lorch · Jonas Rothfuss · Bernhard Schölkopf · Andreas Krause -
2021 Poster: Regret Bounds for Gaussian-Process Optimization in Large Domains »
Manuel Wuethrich · Bernhard Schölkopf · Andreas Krause -
2019 : Bernhard Schölkopf »
Bernhard Schölkopf -
2018 : Learning Independent Mechanisms »
Bernhard Schölkopf