

Tutorial

(Track2) Beyond Accuracy: Grounding Evaluation Metrics for Human-Machine Learning Systems

Praveen Chandar · Fernando Diaz · Brian St. Thomas


Abstract:

The evaluation and optimization of machine learning systems have largely adopted well-known performance metrics such as accuracy (for classification) or squared error (for regression). While these metrics are reusable across a variety of machine learning tasks, they make strong assumptions that often do not hold once the model is situated in a broader technical or sociotechnical system. This is especially true of systems that interact with large populations of humans attempting to complete a goal or satisfy a need (e.g., search, recommendation, game playing). In this tutorial, we will present methods for developing evaluation metrics grounded in what users expect of the system and how they respond to its decisions. The goal of this tutorial is both to share methods for designing user-based quantitative metrics and to motivate new research into optimizing for these more structured metrics.
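As a concrete illustration of the kind of user-grounded metric the abstract describes (an example chosen here, not material from the tutorial itself), rank-biased precision (RBP; Moffat & Zobel, 2008) scores a ranked result list under a simple browsing model: the user inspects results from the top and continues past each one with persistence probability p. A minimal Python sketch, with p = 0.8 chosen arbitrarily:

def rank_biased_precision(relevances, p=0.8):
    # User model: the user reads result 1, then moves on to each
    # subsequent result with probability p. RBP is the expected rate
    # at which the user encounters relevant results:
    #   RBP = (1 - p) * sum_i rel_i * p**(i - 1)   (i is 1-indexed)
    return (1 - p) * sum(rel * p**i for i, rel in enumerate(relevances))

# Two rankings with the same number of relevant items, so a
# position-agnostic metric scores them identically; the user model
# separates them by where attention actually falls.
print(rank_biased_precision([1, 1, 0, 0, 0]))  # ~0.36
print(rank_biased_precision([0, 0, 0, 1, 1]))  # ~0.18

The contrast shows the abstract's point in miniature: a set-level metric such as accuracy cannot distinguish these two rankings, while a metric built on a model of user behavior can.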
