In any given machine learning problem, there may be many models that could explain the data almost equally well. However, most learning algorithms return only one of these models, leaving practitioners with no practical way to explore alternative models that might have desirable properties beyond what could be expressed within a loss function. The Rashomon set is the set of all such almost-optimal models. Rashomon sets can be extremely complicated, particularly for highly nonlinear function classes that allow complex interaction terms, such as decision trees. We provide the first technique for completely enumerating the Rashomon set for sparse decision trees; in fact, our work provides the first complete enumeration of any Rashomon set for a non-trivial problem with a highly nonlinear discrete function class. This gives the user an unprecedented level of control over model choice among all models that are approximately equally good. We represent the Rashomon set in a specialized data structure that supports efficient querying and sampling. We show three applications of the Rashomon set: 1) it can be used to study variable importance for the set of almost-optimal trees (as opposed to a single tree), 2) the Rashomon set for accuracy enables enumeration of the Rashomon sets for balanced accuracy and F1-score, and 3) the Rashomon set for a full dataset can be used to produce Rashomon sets constructed with only subsets of that dataset. Thus, we are able to examine Rashomon sets across problems with a new lens, enabling users to choose models rather than be at the mercy of an algorithm that produces only a single model.
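As a point of reference, the Rashomon set described in the abstract is commonly formalized as the set of models whose objective value is within a tolerance of the best achievable value. The sketch below is one common formalization for illustration only; the notation (model class F, objective L, optimal model f*, tolerance ε) is assumed rather than taken verbatim from the paper.

```latex
% A minimal sketch of one common formalization of the Rashomon set.
% Assumed notation: \mathcal{F} is the model class (e.g., sparse decision trees),
% L is the objective (e.g., misclassification loss plus a sparsity penalty),
% and \varepsilon \ge 0 is a user-chosen tolerance.
\[
  f^{*} \in \operatorname*{arg\,min}_{f \in \mathcal{F}} L(f),
  \qquad
  R_{\varepsilon}(\mathcal{F}) = \bigl\{\, f \in \mathcal{F} : L(f) \le L(f^{*}) + \varepsilon \,\bigr\}.
\]
```

Multiplicative variants, e.g. requiring L(f) ≤ (1 + ε) L(f*), also appear in the literature; in either form, enumerating R_ε(F) exactly for sparse decision trees is the contribution the abstract describes.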
Author Information
Rui Xin (Duke University)
Chudi Zhong (Duke University)
Zhi Chen (Duke University)
Takuya Takagi (Fujitsu Ltd.)
Margo Seltzer (University of British Columbia)
**Margo I. Seltzer** is Canada 150 Research Chair in Computer Systems and the Cheriton Family Chair in Computer Science at the University of British Columbia. Her research interests are in systems, construed quite broadly: systems for capturing and accessing data provenance, file systems, databases, transaction processing systems, storage and analysis of graph-structured data, new architectures for parallelizing execution, and systems that apply technology to problems in healthcare. She is the author of several widely used software packages, including database and transaction libraries and the 4.4BSD log-structured file system. Dr. Seltzer was a co-founder and CTO of Sleepycat Software, the makers of Berkeley DB, which received the 2020 ACM SIGMOD Systems Award. She serves on the Advisory Council for the Canadian COVID Alert app and the Computer Science and Telecommunications Board (CSTB) of the (US) National Academies. She is a past President of the USENIX Association and served as the USENIX representative to the Computing Research Association Board of Directors and on the Computing Community Consortium. She is a member of the National Academy of Engineering and the American Academy of Arts and Sciences, a Sloan Foundation Fellow in Computer Science, an ACM Fellow, a Bunting Fellow, and the recipient of the 1996 Radcliffe Junior Faculty Fellowship. She is recognized as an outstanding teacher and mentor, having received the Phi Beta Kappa teaching award in 1996, the Abramson Teaching Award in 1999, the Capers and Marion McDonald Award for Excellence in Mentoring and Advising in 2010, and the CRA-E Undergraduate Research Mentoring Award in 2017. Professor Seltzer received an A.B. degree in Applied Mathematics from Harvard/Radcliffe College and a Ph.D. in Computer Science from the University of California, Berkeley.
Cynthia Rudin (Duke University)
More from the Same Authors
- 2022: Anomaly Detection in Multiplex Dynamic Networks: from Blockchain Security to Brain Disease Prediction
  Ali Behrouz · Margo Seltzer
- 2022: Making the World More Equal, One Ride at a Time: Studying Public Transportation Initiatives Using Interpretable Causal Inference
  Gaurav Rajesh Parikh · Albert Sun · Jenny Huang · Lesia Semenova · Cynthia Rudin
- 2023 Poster: This Looks Like Those: Illuminating Prototypical Concepts Using Multiple Visualizations
  Chiyu Ma · Brandon Zhao · Chaofan Chen · Cynthia Rudin
- 2023 Poster: A Path to Simpler Models Starts With Noise
  Lesia Semenova · Harry Chen · Ronald Parr · Cynthia Rudin
- 2023 Poster: The Rashomon Importance Distribution: Getting RID of Unstable, Single Model-based Variable Importance
  Jon Donnelly · Srikar Katta · Cynthia Rudin · Edward Browne
- 2023 Poster: CAT-Walk: Inductive Hypergraph Learning via Set Walks
  Ali Behrouz · Farnoosh Hashemi · Sadaf Sadeghian · Margo Seltzer
- 2023 Poster: Exploring and Interacting with the Set of Good Sparse Generalized Additive Models
  Zhi Chen · Chudi Zhong · Margo Seltzer · Cynthia Rudin
- 2023 Poster: OKRidge: Scalable Optimal k-Sparse Ridge Regression for Learning Dynamical Systems
  Jiachang Liu · Sam Rosen · Chudi Zhong · Cynthia Rudin
- 2022 Panel: Panel 3A-2: Linear tree shap… & Exploring the Whole…
  peng yu · Cynthia Rudin
- 2022 Spotlight: Anomaly Detection in Multiplex Dynamic Networks: from Blockchain Security to Brain Disease Prediction
  Ali Behrouz · Margo Seltzer
- 2022: Panel Discussion
  Cynthia Rudin · Dan Bohus · Brenna Argall · Alison Gopnik · Igor Mordatch · Samuel Kaski
- 2022: Let’s Give Domain Experts a Choice by Creating Many Approximately-Optimal Machine Learning Models
  Cynthia Rudin
- 2022 Poster: Rethinking Nonlinear Instrumental Variable Models through Prediction Validity
  Chunxiao Li · Cynthia Rudin · Tyler H. McCormick
- 2022 Poster: FasterRisk: Fast and Accurate Interpretable Risk Scores
  Jiachang Liu · Chudi Zhong · Boxuan Li · Margo Seltzer · Cynthia Rudin
- 2021: AME: Interpretable Almost Exact Matching for Causal Inference
  Haoning Jiang · Thomas Howell · Neha Gupta · Vittorio Orlandi · Sudeepa Roy · Marco Morucci · Harsh Parikh · Alexander Volfovsky · Cynthia Rudin
- 2020 Contributed Talk: Cryo-ZSSR: multiple-image super-resolution based on deep internal learning
  Qinwen Huang · Reed Chen · Cynthia Rudin
- 2020 Workshop: Self-Supervised Learning -- Theory and Practice
  Pengtao Xie · Shanghang Zhang · Pulkit Agrawal · Ishan Misra · Cynthia Rudin · Abdelrahman Mohamed · Wenzhen Yuan · Barret Zoph · Laurens van der Maaten · Xingyi Yang · Eric Xing
- 2020: How should researchers engage with controversial applications of AI?
  Logan Koepke · CATHERINE ONEIL · Tawana Petty · Cynthia Rudin · Deborah Raji · Shawn Bushway
- 2020 Workshop: Fair AI in Finance
  Senthil Kumar · Cynthia Rudin · John Paisley · Isabelle Moulinier · C. Bayan Bruss · Eren K. · Susan Tibbs · Oluwatobi Olabiyi · Simona Gandrabur · Svitlana Vyetrenko · Kevin Compher
- 2019 Poster: This Looks Like That: Deep Learning for Interpretable Image Recognition
  Chaofan Chen · Oscar Li · Daniel Tao · Alina Barnett · Cynthia Rudin · Jonathan K Su
- 2019 Spotlight: This Looks Like That: Deep Learning for Interpretable Image Recognition
  Chaofan Chen · Oscar Li · Daniel Tao · Alina Barnett · Cynthia Rudin · Jonathan K Su
- 2019 Poster: Optimal Sparse Decision Trees
  Xiyang Hu · Cynthia Rudin · Margo Seltzer
- 2019 Spotlight: Optimal Sparse Decision Trees
  Xiyang Hu · Cynthia Rudin · Margo Seltzer
- 2018 Invited Talk 6: Is it possible to have interpretable models for AI in Finance?
  Cynthia Rudin
- 2018 Poster Session 1 (note there are numerous missing names here, all papers appear in all poster sessions)
  Akhilesh Gotmare · Kenneth Holstein · Jan Brabec · Michal Uricar · Kaleigh Clary · Cynthia Rudin · Sam Witty · Andrew Ross · Shayne O'Brien · Babak Esmaeili · Jessica Forde · Massimo Caccia · Ali Emami · Scott Jordan · Bronwyn Woods · D. Sculley · Rebekah Overdorf · Nicolas Le Roux · Peter Henderson · Brandon Yang · Tzu-Yu Liu · David Jensen · Niccolo Dalmasso · Weitang Liu · Paul Marc TRICHELAIR · Jun Ki Lee · Akanksha Atrey · Matt Groh · Yotam Hechtlinger · Emma Tosch