Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks. However, their accuracy comes at the cost of intelligibility: it is usually unclear how they make their decisions. This hinders their applicability to high-stakes decision-making domains such as healthcare. We propose Neural Additive Models (NAMs), which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models. NAMs learn a linear combination of neural networks that each attend to a single input feature. These networks are trained jointly and can learn arbitrarily complex relationships between their input feature and the output. Our experiments on regression and classification datasets show that NAMs are more accurate than widely used intelligible models such as logistic regression and shallow decision trees. They match the accuracy of existing state-of-the-art generalized additive models but are more flexible because they are based on neural nets rather than boosted trees. To demonstrate this flexibility, we show that the composability of NAMs enables multitask learning on synthetic data and on the COMPAS recidivism data, and that their differentiability allows them to train more complex interpretable models for COVID-19.
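The additive structure described in the abstract can be illustrated with a minimal NumPy sketch of a NAM forward pass. This is an illustrative toy, not the authors' implementation: each input feature is routed to its own small MLP (a hypothetical one-hidden-layer "shape function"), and the prediction is a learned bias plus the sum of the per-feature contributions.

```python
import numpy as np

# Minimal sketch of a Neural Additive Model (NAM) forward pass.
# Prediction: y = b + sum_i f_i(x_i), where each f_i is a small
# neural net that sees only the i-th feature. All names here are
# illustrative; the real NAMs also use special activations (ExU)
# and regularization not shown in this toy.

rng = np.random.default_rng(0)

def make_feature_net(hidden=16):
    """Parameters of a one-hidden-layer MLP: scalar feature -> scalar contribution."""
    return {
        "w1": rng.normal(size=(1, hidden)) * 0.5,
        "b1": np.zeros(hidden),
        "w2": rng.normal(size=(hidden, 1)) * 0.5,
        "b2": np.zeros(1),
    }

def feature_net_forward(params, x):
    # x: (batch, 1) column holding a single feature
    h = np.maximum(0.0, x @ params["w1"] + params["b1"])  # ReLU hidden layer
    return h @ params["w2"] + params["b2"]                # scalar contribution

def nam_forward(nets, bias, X):
    # Sum the per-feature shape functions; intelligibility comes from
    # being able to plot each f_i(x_i) individually.
    contribs = [feature_net_forward(net, X[:, [i]]) for i, net in enumerate(nets)]
    return bias + np.sum(contribs, axis=0)  # shape (batch, 1)

n_features = 3
nets = [make_feature_net() for _ in range(n_features)]
X = rng.normal(size=(8, n_features))
y_hat = nam_forward(nets, bias=0.0, X=X)
print(y_hat.shape)  # (8, 1)
```

Because the model is a sum of univariate functions, each feature's effect can be inspected in isolation by plotting `feature_net_forward(nets[i], grid)` over a grid of values, which is the source of the intelligibility the abstract refers to.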
Author Information
Rishabh Agarwal (Google Research, Brain Team)
Levi Melnick (Microsoft)
Nicholas Frosst (Google)
Xuezhou Zhang (Princeton)
Ben Lengerich (Carnegie Mellon University)
Rich Caruana (Microsoft)
Geoffrey Hinton (Google)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Neural Additive Models: Interpretable Machine Learning with Neural Nets
  Wed. Dec 8th, 8:30 -- 10:00 AM
More from the Same Authors
- 2021: DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization
  Aviral Kumar · Rishabh Agarwal · Tengyu Ma · Aaron Courville · George Tucker · Sergey Levine
- 2021: Behavior Predictive Representations for Generalization in Reinforcement Learning
  Siddhant Agarwal · Aaron Courville · Rishabh Agarwal
- 2021: Reward Poisoning in Reinforcement Learning: Attacks Against Unknown Learners in Unknown Environments
  Amin Rakhsha · Xuezhou Zhang · Jerry Zhu · Adish Singla
- 2022 Poster: Reincarnating Reinforcement Learning: Reusing Prior Computation to Accelerate Progress
  Rishabh Agarwal · Max Schwarzer · Pablo Samuel Castro · Aaron Courville · Marc Bellemare
- 2022 Poster: A Unified Sequence Interface for Vision Tasks
  Ting Chen · Saurabh Saxena · Lala Li · Tsung-Yi Lin · David Fleet · Geoffrey Hinton
- 2021: Representation Learning for Online and Offline RL in Low-rank MDPs
  Masatoshi Uehara · Xuezhou Zhang · Wen Sun
- 2021: DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization Q&A
  Aviral Kumar · Rishabh Agarwal · Tengyu Ma · Aaron Courville · George Tucker · Sergey Levine
- 2021 Poster: Canonical Capsules: Self-Supervised Capsules in Canonical Pose
  Weiwei Sun · Andrea Tagliasacchi · Boyang Deng · Sara Sabour · Soroosh Yazdani · Geoffrey Hinton · Kwang Moo Yi
- 2021 Oral: Deep Reinforcement Learning at the Edge of the Statistical Precipice
  Rishabh Agarwal · Max Schwarzer · Pablo Samuel Castro · Aaron Courville · Marc Bellemare
- 2021 Poster: Deep Reinforcement Learning at the Edge of the Statistical Precipice
  Rishabh Agarwal · Max Schwarzer · Pablo Samuel Castro · Aaron Courville · Marc Bellemare
- 2020 Poster: Task-agnostic Exploration in Reinforcement Learning
  Xuezhou Zhang · Yuzhe Ma · Adish Singla
- 2019: Poster Session 2
  Mayur Saxena · Nicholas Frosst · Vivien Cabannes · Gene Kogan · Austin Dill · Anurag Sarkar · Joel Ruben Antony Moniz · Vibert Thio · Scott Sievert · Lia Coleman · Frederik De Bleser · Brian Quanz · Jonathon Kereliuk · Panos Achlioptas · Mohamed Elhoseiny · Songwei Ge · Aidan Gomez · Jamie Brew
- 2019: Cell
  Anne Carpenter · Jian Zhou · Maria Chikina · Alexander Tong · Ben Lengerich · Aly Abdelkareem · Gokcen Eraslan · Stephen Ra · Daniel Burkhardt · Frederick A Matsen IV · Alan Moses · Zhenghao Chen · Marzieh Haghighi · Alex Lu · Geoffrey Schau · Jeff Nivala · Miriam Shiffman · Hannes Harbrecht · Levi Masengo Wa Umba · Joshua Weinstein
- 2019 Poster: Policy Poisoning in Batch Reinforcement Learning and Control
  Yuzhe Ma · Xuezhou Zhang · Wen Sun · Jerry Zhu
- 2019 Poster: Learning Sample-Specific Models with Low-Rank Personalized Regression
  Ben Lengerich · Bryon Aragam · Eric Xing
- 2018: Accepted papers
  Sven Gowal · Bogdan Kulynych · Marius Mosbach · Nicholas Frosst · Phil Roth · Utku Ozbulak · Simral Chaudhary · Toshiki Shibahara · Salome Viljoen · Nikita Samarin · Briland Hitaj · Rohan Taori · Emanuel Moss · Melody Guan · Lukas Schott · Angus Galloway · Anna Golubeva · Xiaomeng Jin · Felix Kreuk · Akshayvarun Subramanya · Vipin Pillai · Hamed Pirsiavash · Giuseppe Ateniese · Ankita Kalra · Logan Engstrom · Anish Athalye
- 2017 Poster: Dynamic Routing Between Capsules
  Sara Sabour · Nicholas Frosst · Geoffrey E Hinton
- 2017 Spotlight: Dynamic Routing Between Capsules
  Sara Sabour · Nicholas Frosst · Geoffrey E Hinton
- 2015 Poster: Grammar as a Foreign Language
  Oriol Vinyals · Łukasz Kaiser · Terry Koo · Slav Petrov · Ilya Sutskever · Geoffrey Hinton