Spotlight in Workshop: Transparent and interpretable Machine Learning in Safety Critical Environments

Poster spotlights
Hiroshi Kuwajima · Masayuki Tanaka · Qingkai Liang · Matthieu Komorowski · Fanyu Que · Thalita F Drumond · Aniruddh Raghu · Leo Anthony Celi · Christina Göpfert · Andrew Ross · Sarah Tan · Rich Caruana · Yin Lou · Devinder Kumar · Graham Taylor · Forough Poursabzi-Sangdeh · Jennifer Wortman Vaughan · Hanna Wallach
[1] "Network Analysis for Explanation" [2] "Using prototypes to improve convolutional networks interpretability" [3] "Accelerated Primal-Dual Policy Optimization for Safe Reinforcement Learning" [4] "Deep Reinforcement Learning for Sepsis Treatment" [5] "Analyzing Feature Relevance for Linear Reject Option SVM using Relevance Intervals" [6] "The Neural LASSO: Local Linear Sparsity for Interpretable Explanations" [7] "Detecting Bias in Black-Box Models Using Transparent Model Distillation" [8] "Data masking for privacy-sensitive learning" [9] "CLEAR-DR: Interpretable Computer Aided Diagnosis of Diabetic Retinopathy" [10] "Manipulating and Measuring Model Interpretability"