Abstract
Improving diabetes outcomes at scale requires moving beyond quarterly clinic visits to AI-guided, continuous care grounded in wearable data and robust causal reasoning. Yet when black-box models are trained on observational data, high predictive accuracy can coexist with low causal validity: a model may fit observed glucose trajectories well yet respond nonsensically to a simulated insulin intervention. I’ll first present an approach that encodes domain knowledge about treatment-effect rankings into a causal loss which, combined with a standard predictive loss, biases learning toward physiologically plausible models. I’ll then turn to detecting and localizing treatment effects in high-dimensional outcome spaces, such as week-long continuous glucose monitoring (CGM) traces. Finally, I’ll describe a pipeline for learning explainable treatment policies for remote patient monitoring, where clinician-informed state and action representations yield targeting policies that are both more effective and more interpretable than black-box alternatives. Together, these pieces show how causally grounded modeling, high-dimensional treatment-effect inference, and interpretable policy learning can work in concert to support trustworthy AI-guided clinical care.
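To make the three components concrete, the sketches below show one way each idea could look in code. They are minimal illustrations under stated assumptions, not the implementations described in the talk.

First, the causal loss. A minimal PyTorch sketch, assuming a hypothetical `model(x, dose)` that predicts glucose from patient state and a simulated insulin dose; the hinge penalty, margin, and dose pairing are illustrative choices for encoding the domain ranking "more insulin should not raise predicted glucose":

```python
import torch
import torch.nn.functional as F

def causal_ranking_penalty(model, x, dose_lo, dose_hi, margin=0.0):
    """Hinge penalty for an assumed domain ranking: for the same patient
    state, a higher simulated insulin dose should not yield a higher
    predicted glucose than a lower dose (illustrative, not the talk's loss)."""
    g_lo = model(x, dose_lo)  # predicted glucose under the lower dose
    g_hi = model(x, dose_hi)  # predicted glucose under the higher dose
    return F.relu(g_hi - g_lo + margin).mean()

def combined_loss(model, x, dose, y, dose_lo, dose_hi, lam=1.0):
    predictive = F.mse_loss(model(x, dose), y)  # standard fit to observed data
    causal = causal_ranking_penalty(model, x, dose_lo, dose_hi)
    return predictive + lam * causal  # lam trades prediction vs. plausibility
```

Second, detection and localization in high-dimensional outcomes. One standard construction, assumed here purely for illustration, is a per-timepoint effect estimate with a max-statistic permutation test over CGM traces sampled on a common weekly time grid:

```python
import numpy as np

def pointwise_effect(treated, control, n_perm=1000, seed=0):
    """treated, control: arrays of shape (n_patients, n_timepoints).
    Returns the per-timepoint mean difference (localization) and a
    max-statistic permutation p-value for any effect (detection)."""
    rng = np.random.default_rng(seed)
    diff = treated.mean(axis=0) - control.mean(axis=0)
    pooled = np.vstack([treated, control])
    n_t = treated.shape[0]
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        idx = rng.permutation(pooled.shape[0])  # shuffle treatment labels
        d = pooled[idx[:n_t]].mean(axis=0) - pooled[idx[n_t:]].mean(axis=0)
        max_null[i] = np.abs(d).max()
    p_value = (max_null >= np.abs(diff).max()).mean()
    return diff, p_value
```

Third, explainable policy learning. A shallow decision tree over clinician-informed state features is one way to keep a targeting policy auditable; the features, labels, and data below are hypothetical placeholders, not the talk's pipeline:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical clinician-informed state per patient-week:
# [time in range (fraction), mean glucose (mg/dL), days since last outreach]
X = np.array([[0.55, 182.0, 21.0],
              [0.81, 141.0,  7.0],
              [0.42, 205.0, 30.0],
              [0.74, 150.0, 14.0]])
y = np.array([1, 0, 1, 0])  # hypothetical: 1 = outreach helped, 0 = it did not

policy = DecisionTreeClassifier(max_depth=2).fit(X, y)  # depth 2 keeps it readable
print(export_text(policy, feature_names=[
    "time_in_range", "mean_glucose", "days_since_outreach"]))
```

The depth cap in the last sketch is the design lever: a clinician can read the printed rules directly, which is the kind of interpretability the abstract contrasts with black-box alternatives.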