Machine learning has successfully framed many sequential decision-making problems as either supervised prediction or optimal decision-making policy identification via reinforcement learning. In data-constrained offline settings, both approaches may fail, as they either assume fully optimal behavior or rely on exploring alternatives that may not exist. We introduce an inherently different approach that identifies "dead-ends" of a state space. We focus on patient condition in the intensive care unit, where a "medical dead-end" indicates that a patient will expire regardless of all potential future treatment sequences. We postulate "treatment security" as avoiding treatments with probability proportional to their chance of leading to dead-ends, present a formal proof, and frame discovery as an RL problem. We then train three independent deep neural models for automated state construction, dead-end discovery, and confirmation. Our empirical results show that dead-ends exist in real clinical data among septic patients, and further reveal gaps between secure treatments and those actually administered.
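The "treatment security" condition described above can be read as a per-action cap: if a treatment leads to a dead-end with probability p, a secure policy may select it with probability at most 1 − p. The following is a minimal illustrative sketch of that constraint, assuming the dead-end probabilities are already estimated (the helper names `is_secure` and `make_secure` are hypothetical, not the paper's API, and the paper's actual method uses learned deep value networks rather than these toy inputs):

```python
def is_secure(policy_probs, dead_end_probs, tol=1e-9):
    """Check the security condition pi(a|s) <= 1 - P(dead-end | s, a)
    for every action, with a small numerical tolerance."""
    return all(pi <= (1.0 - p) + tol
               for pi, p in zip(policy_probs, dead_end_probs))

def make_secure(policy_probs, dead_end_probs):
    """Illustrative projection onto the security constraint: clamp
    violating actions to their caps (1 - p) and redistribute the
    remaining mass proportionally among the unclamped actions.
    Assumes a feasible secure policy exists (sum of caps >= 1)."""
    n = len(policy_probs)
    caps = [1.0 - p for p in dead_end_probs]
    fixed = {}  # actions pinned at their caps
    while True:
        free = [i for i in range(n) if i not in fixed]
        remaining = 1.0 - sum(fixed.values())
        free_mass = sum(policy_probs[i] for i in free)
        scaled = {i: policy_probs[i] * remaining / free_mass for i in free}
        violators = [i for i in free if scaled[i] > caps[i]]
        if not violators:
            return [fixed.get(i, scaled.get(i)) for i in range(n)]
        for i in violators:
            fixed[i] = caps[i]  # pin at the cap and rescale the rest

# Toy example: the third treatment almost surely leads to a dead-end.
pi = [0.5, 0.3, 0.2]
p_dead = [0.0, 0.1, 0.95]
print(is_secure(pi, p_dead))            # False: 0.2 > 1 - 0.95
secure_pi = make_secure(pi, p_dead)
print(is_secure(secure_pi, p_dead))     # True
```

This is only a sanity-check-style illustration of the constraint itself; in the paper, dead-end risk is estimated from data by the trained dead-end discovery and confirmation networks, not supplied by hand.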
Author Information
Mehdi Fatemi (Microsoft Research)
Taylor Killian (University of Toronto, Vector Institute)
Jayakumar Subramanian (Adobe Systems)
Marzyeh Ghassemi (University of Toronto, Vector Institute)
More from the Same Authors
- 2021 : Status-quo policy gradient in Multi-Agent Reinforcement Learning
  Pinkesh Badjatiya · Mausoom Sarkar · Nikaash Puri · Jayakumar Subramanian · Abhishek Sinha · Siddharth Singh · Balaji Krishnamurthy
- 2022 : Trajectory-based Explainability Framework for Offline RL
  Shripad Deshmukh · Arpan Dasgupta · Chirag Agarwal · Nan Jiang · Balaji Krishnamurthy · Georgios Theocharous · Jayakumar Subramanian
- 2022 : Dissecting In-the-Wild Stress from Multimodal Sensor Data
  Sujay Nagaraj · Thomas Hartvigsen · Adrian Boch · Luca Foschini · Marzyeh Ghassemi · Sarah Goodday · Stephen Friend · Anna Goldenberg
- 2021 Poster: Learning Optimal Predictive Checklists
  Haoran Zhang · Quaid Morris · Berk Ustun · Marzyeh Ghassemi
- 2021 Poster: Characterizing Generalization under Out-Of-Distribution Shifts in Deep Metric Learning
  Timo Milbich · Karsten Roth · Samarth Sinha · Ludwig Schmidt · Marzyeh Ghassemi · Bjorn Ommer
- 2020 : Policy Panel
  Roya Pakzad · Dia Kayyali · Marzyeh Ghassemi · Shakir Mohamed · Mohammad Norouzi · Ted Pedersen · Anver Emon · Abubakar Abid · Darren Byler · Samhaa R. El-Beltagy · Nayel Shafei · Mona Diab
- 2020 : Welcome
  Marzyeh Ghassemi
- 2019 Poster: The Cells Out of Sample (COOS) dataset and benchmarks for measuring out-of-sample generalization of image classifiers
  Alex Lu · Amy Lu · Wiebke Schormann · Marzyeh Ghassemi · David Andrews · Alan Moses
- 2019 Poster: Using a Logarithmic Mapping to Enable Lower Discount Factors in Reinforcement Learning
  Harm Van Seijen · Mehdi Fatemi · Arash Tavakoli
- 2019 Oral: Using a Logarithmic Mapping to Enable Lower Discount Factors in Reinforcement Learning
  Harm Van Seijen · Mehdi Fatemi · Arash Tavakoli
- 2017 Poster: Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes
  Taylor Killian · Samuel Daulton · Finale Doshi-Velez · George Konidaris
- 2017 Oral: Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes
  Taylor Killian · Samuel Daulton · Finale Doshi-Velez · George Konidaris
- 2017 Poster: Hybrid Reward Architecture for Reinforcement Learning
  Harm Van Seijen · Mehdi Fatemi · Romain Laroche · Joshua Romoff · Tavian Barnes · Jeffrey Tsang