

Poster in Workshop: Challenges in Deploying and Monitoring Machine Learning Systems

A Human-Centric Take on Model Monitoring

Murtuza Shergadwala · Himabindu Lakkaraju · Krishnaram Kenthapadi


Abstract:

Predictive models are increasingly used for consequential decisions in high-stakes domains such as healthcare, finance, and policy. It is therefore critical to ensure that these models make accurate predictions, are robust to shifts in the data, do not rely on spurious features, and do not unduly discriminate against minority groups. To this end, recent literature has proposed several approaches spanning areas such as explainability, fairness, and robustness. Such approaches need to be human-centered, as they are meant to help users understand the models. However, there is a research gap in understanding the human-centric needs and challenges of monitoring machine learning (ML) models once they are deployed. To fill this gap, we conducted an interview study with 13 practitioners experienced at the intersection of deploying ML models and engaging with customers, spanning domains such as financial services, healthcare, hiring, online retail, computational advertising, and conversational assistants. We identified various human-centric challenges and requirements for model monitoring in real-world applications. In particular, we found that model monitoring systems need to clarify the impact of monitoring observations on outcomes, and that doing so remains challenging. Further, such insights must be actionable, customizable for domain-specific use cases, and cognitively considerate to avoid information overload.
