Fri Dec 9th 08:00 AM -- 06:30 PM @ Room 113
Reliable Machine Learning in the Wild
Dylan Hadfield-Menell · Adrian Weller · David Duvenaud · Jacob Steinhardt · Percy S Liang


When will a system that has performed well in the past continue to do so in the future? How do we design such systems in the presence of novel and potentially adversarial input distributions? What techniques will let us safely build and deploy autonomous systems at a scale where human monitoring becomes difficult or infeasible? Answering these questions is critical to guaranteeing the safety of emerging high-stakes applications of AI, such as self-driving cars and automated surgical assistants. This workshop will bring together researchers in areas such as human-robot interaction, security, causal inference, and multi-agent systems to strengthen the field of reliability engineering for machine learning systems. We are interested in approaches that have the potential to provide assurances of reliability, especially as systems scale in autonomy and complexity. We will focus on four aspects — robustness (to adversaries, distributional shift, model mis-specification, corrupted data); awareness (of when a change has occurred, when the model might be mis-calibrated, etc.); adaptation (to new situations or objectives); and monitoring (allowing humans to meaningfully track the state of the system). Together, these will aid us in designing and deploying reliable machine learning systems.
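As a concrete illustration of the "awareness" theme above — detecting when the input distribution has changed — the following sketch (not drawn from any workshop talk; the function names and thresholds are our own) flags covariate shift by comparing a feature's empirical distribution at training time against a deployment batch, using a hand-rolled two-sample Kolmogorov-Smirnov statistic:

```python
import random

def ks_statistic(xs, ys):
    """Largest gap between the empirical CDFs of two samples.

    A value near 0 means the samples look alike; a large value
    suggests the deployment data has drifted from training data.
    """
    xs, ys = sorted(xs), sorted(ys)
    d = 0.0
    for p in xs + ys:
        fx = sum(1 for x in xs if x <= p) / len(xs)
        fy = sum(1 for y in ys if y <= p) / len(ys)
        d = max(d, abs(fx - fy))
    return d

# Hypothetical data: training feature values vs. two deployment batches.
random.seed(0)
train   = [random.gauss(0.0, 1.0) for _ in range(500)]
same    = [random.gauss(0.0, 1.0) for _ in range(500)]  # no shift
shifted = [random.gauss(1.0, 1.0) for _ in range(500)]  # mean drifted

print(ks_statistic(train, same))     # small: no alarm
print(ks_statistic(train, shifted))  # large: flag for human review
```

In a production monitoring pipeline one would use an optimized implementation (e.g. `scipy.stats.ks_2samp`) and calibrate the alarm threshold against the KS null distribution rather than eyeballing it.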

08:40 AM Opening Remarks
Jacob Steinhardt
09:00 AM Rules for Reliable Machine Learning
Martin A Zinkevich
09:30 AM What's your ML Test Score? A rubric for ML production systems
D. Sculley
09:45 AM Poster Spotlights I
10:30 AM Robust Learning and Inference
Yishay Mansour
11:00 AM Automated versus do-it-yourself methods for causal inference: Lessons learned from a data analysis competition
Jennifer Hill
11:30 AM Robust Covariate Shift Classification Using Multiple Feature Views
Angie Liu
11:45 AM Poster Spotlights II
01:15 PM Doug Tygar
01:45 PM Adversarial Examples and Adversarial Training
Ian Goodfellow
02:15 PM Summoning Demons: The Pursuit of Exploitable Bugs in Machine Learning
Octavian Suciu
02:30 PM Poster Spotlights III
02:45 PM Poster Session
03:30 PM Learning Reliable Objectives
Anca Dragan
04:00 PM Building and Validating the AI behind the Next-Generation Aircraft Collision Avoidance System
Mykel J Kochenderfer
04:30 PM Online Prediction with Selfish Experts
Okke Schrijvers
04:45 PM TensorFlow Debugger: Debugging Dataflow Graphs for Machine Learning
D. Sculley
05:00 PM What are the challenges to making machine learning reliable in practice?