Talk in Workshop: Transparent and Interpretable Machine Learning in Safety Critical Environments
Contributed talk: Beyond Sparsity: Tree-based Regularization of Deep Models for Interpretability
Mike Wu · Sonali Parbhoo · Finale Doshi-Velez
Abstract:
The lack of interpretability remains a key barrier to the adoption of deep models in many healthcare applications. In this work, we explicitly regularize deep models so that human users can step through the reasoning behind their predictions in little time. Specifically, we train deep time-series models so that their class-probability predictions have high accuracy while being closely modeled by decision trees with few nodes. On two clinical decision-making tasks, we demonstrate that this new tree-based regularization is distinct from simpler L2 or L1 penalties, resulting in more interpretable models without sacrificing predictive power.
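The core quantity behind this idea can be sketched in a few lines: fit a small decision tree to a trained network's predictions and measure the average decision-path length, which serves as the interpretability penalty. This is a minimal post-hoc illustration only; the paper itself makes the penalty usable during training via a differentiable surrogate, and the dataset, model sizes, and function names below are hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Toy data standing in for clinical features (hypothetical stand-in).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# A small deep model whose predictions we want to keep "tree-like".
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X, y)

def average_path_length(model, tree, X):
    """Tree-regularization penalty (post hoc): fit a decision tree to
    the deep model's hard predictions, then measure the mean number of
    splits a sample traverses -- shorter paths mean a human can simulate
    the prediction in fewer steps."""
    y_hat = model.predict(X)
    tree.fit(X, y_hat)
    # decision_path marks, per sample, every node visited; subtracting 1
    # counts edges (splits) rather than nodes including the leaf.
    paths = tree.decision_path(X)
    return float(paths.sum(axis=1).mean() - 1.0), tree

penalty, surrogate_tree = average_path_length(
    net, DecisionTreeClassifier(max_depth=5, random_state=0), X)
print(f"average decision-path length: {penalty:.2f}")
```

During training, this penalty is non-differentiable with respect to the network weights, which is why the paper learns a surrogate that predicts the path length from the weights themselves.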