Machine learning has been successfully applied in systems applications such as memory prefetching and caching, where learned models have been shown to outperform heuristics. However, the lack of understanding of the inner workings of these models -- their limited interpretability -- remains a major obstacle to adoption in real-world deployments. Understanding a model's behavior can help system administrators and developers gain confidence in the model, understand risks, and debug unexpected behavior in production. Interpretability for models used in computer systems poses a particular challenge: unlike ML models trained on images or text, their input domain (e.g., memory access patterns, program counters) is not immediately interpretable. A major challenge is therefore to explain the model in terms of concepts that are approachable to a human practitioner. By analyzing a state-of-the-art caching model, we provide evidence that the model has learned concepts beyond simple statistics that can be leveraged for explanations. Our work is a first step towards understanding ML models in systems and highlights both the promise and the challenges of this emerging research area.
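As a purely illustrative sketch (not the paper's own analysis), one standard way to check whether a learned caching model encodes a human-interpretable concept is to train a linear probe on its hidden activations to predict a concept label, e.g., "this cache line is reused soon." The data, dimensions, and names below are hypothetical stand-ins; only the probing idea itself is standard practice.

# Hypothetical sketch: linear-probe test for a concept in a caching model's
# hidden states. Assumes per-access hidden activations and binary concept
# labels are already available; here they are random stand-ins, so the probe
# should score near chance. Real activations that encode the concept would
# score well above chance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: 10k memory accesses, 128-dim hidden activations, and a
# binary "reused within the next N accesses" concept label per access.
hidden_states = rng.normal(size=(10_000, 128))
concept_labels = rng.integers(0, 2, size=10_000)

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, concept_labels, test_size=0.2, random_state=0
)

# If a simple linear model predicts the concept from the activations well
# above chance, the concept is (at least linearly) encoded in the model.
probe = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.3f}")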
Author Information
Leon Sixt (Freie Universität Berlin)
Evan Liu (Stanford University)
Marie Pellat (Google)
James Wexler
Milad Hashemi (Google)
Been Kim (Google)
Martin Maas (Google)
More from the Same Authors
- 2020 : Decoupling Exploration and Exploitation in Meta-Reinforcement Learning without Sacrifices »
  Evan Liu
- 2021 : Two Sides of the Same Coin: Heterophily and Oversmoothing in Graph Convolutional Neural Networks »
  Yujun Yan · Milad Hashemi · Kevin Swersky · Yaoqing Yang · Danai Koutra
- 2022 Panel: Panel 4A-4: Giving Feedback on… & Computationally Efficient Horizon-Free… »
  Dongruo Zhou · Evan Liu
- 2022 Workshop: Machine Learning for Systems »
  Neel Kant · Martin Maas · Azade Nova · Benoit Steiner · Xinlei XU · Dan Zhang
- 2022 Poster: Learning Options via Compression »
  Yiding Jiang · Evan Liu · Benjamin Eysenbach · J. Zico Kolter · Chelsea Finn
- 2022 Poster: Giving Feedback on Interactive Student Programs with Meta-Exploration »
  Evan Liu · Moritz Stephan · Allen Nie · Chris Piech · Emma Brunskill · Chelsea Finn
- 2021 : Closing Remarks »
  Jonathan Raiman · Mimee Xu · Martin Maas · Anna Goldie · Azade Nova · Benoit Steiner
- 2021 : Data-Driven Offline Optimization for Architecting Hardware Accelerators »
  Aviral Kumar · Amir Yazdanbakhsh · Milad Hashemi · Kevin Swersky · Sergey Levine
- 2021 : Opening Remarks »
  Jonathan Raiman · Anna Goldie · Benoit Steiner · Azade Nova · Martin Maas · Mimee Xu
- 2021 Workshop: ML For Systems »
  Benoit Steiner · Jonathan Raiman · Martin Maas · Azade Nova · Mimee Xu · Anna Goldie
- 2020 Workshop: Machine Learning for Systems »
  Anna Goldie · Azalia Mirhoseini · Jonathan Raiman · Martin Maas · Xinlei XU
- 2020 Poster: Debugging Tests for Model Explanations »
  Julius Adebayo · Michael Muelly · Ilaria Liccardi · Been Kim
- 2020 Poster: Neural Execution Engines: Learning to Execute Subroutines »
  Yujun Yan · Kevin Swersky · Danai Koutra · Parthasarathy Ranganathan · Milad Hashemi
- 2020 Poster: On Completeness-aware Concept-Based Explanations in Deep Neural Networks »
  Chih-Kuan Yeh · Been Kim · Sercan Arik · Chun-Liang Li · Tomas Pfister · Pradeep Ravikumar
- 2019 Workshop: ML For Systems »
  Milad Hashemi · Azalia Mirhoseini · Anna Goldie · Kevin Swersky · Xinlei XU · Jonathan Raiman
- 2019 Poster: Towards Automatic Concept-based Explanations »
  Amirata Ghorbani · James Wexler · James Zou · Been Kim
- 2019 Poster: Visualizing and Measuring the Geometry of BERT »
  Emily Reif · Ann Yuan · Martin Wattenberg · Fernanda Viegas · Andy Coenen · Adam Pearce · Been Kim
- 2019 Poster: A Benchmark for Interpretability Methods in Deep Neural Networks »
  Sara Hooker · Dumitru Erhan · Pieter-Jan Kindermans · Been Kim
- 2018 Workshop: Machine Learning for Systems »
  Anna Goldie · Azalia Mirhoseini · Jonathan Raiman · Kevin Swersky · Milad Hashemi
- 2018 : Interpretability for when NOT to use machine learning by Been Kim »
  Been Kim
- 2018 Poster: Human-in-the-Loop Interpretability Prior »
  Isaac Lage · Andrew Ross · Samuel J Gershman · Been Kim · Finale Doshi-Velez
- 2018 Spotlight: Human-in-the-Loop Interpretability Prior »
  Isaac Lage · Andrew Ross · Samuel J Gershman · Been Kim · Finale Doshi-Velez
- 2018 Poster: Sanity Checks for Saliency Maps »
  Julius Adebayo · Justin Gilmer · Michael Muelly · Ian Goodfellow · Moritz Hardt · Been Kim
- 2018 Spotlight: Sanity Checks for Saliency Maps »
  Julius Adebayo · Justin Gilmer · Michael Muelly · Ian Goodfellow · Moritz Hardt · Been Kim
- 2018 Poster: To Trust Or Not To Trust A Classifier »
  Heinrich Jiang · Been Kim · Melody Guan · Maya Gupta