Workshop
Reliable Machine Learning in the Wild
Dylan Hadfield-Menell · Adrian Weller · David Duvenaud · Jacob Steinhardt · Percy Liang

Thu Dec 08 11:00 PM -- 09:30 AM (PST) @ Room 113
Event URL: https://sites.google.com/site/wildml2016nips/?pli=1

When will a system that has performed well in the past continue to do so in the future? How do we design such systems in the presence of novel and potentially adversarial input distributions? What techniques will let us safely build and deploy autonomous systems on a scale where human monitoring becomes difficult or infeasible? Answering these questions is critical to guaranteeing the safety of emerging high-stakes applications of AI, such as self-driving cars and automated surgical assistants. This workshop will bring together researchers in areas such as human-robot interaction, security, causal inference, and multi-agent systems in order to strengthen the field of reliability engineering for machine learning systems. We are interested in approaches that have the potential to provide assurances of reliability, especially as systems scale in autonomy and complexity. We will focus on four aspects: robustness (to adversaries, distributional shift, model mis-specification, corrupted data); awareness (of when a change has occurred, when the model might be mis-calibrated, etc.); adaptation (to new situations or objectives); and monitoring (allowing humans to meaningfully track the state of the system). Together, these will aid us in designing and deploying reliable machine learning systems.

Thu 11:40 p.m. - 12:00 a.m.
Opening Remarks (Talk)
Jacob Steinhardt
Fri 12:00 a.m. - 12:30 a.m.
Rules for Reliable Machine Learning (Invited Talk)
Martin A Zinkevich
Fri 12:30 a.m. - 12:45 a.m.
What's your ML Test Score? A rubric for ML production systems (Contributed Talk)
D. Sculley
Fri 12:45 a.m. - 1:00 a.m.
Poster Spotlights I (Spotlight)
Fri 1:30 a.m. - 2:00 a.m.

Robust inference is an extension of probabilistic inference, where some of the observations may be adversarially corrupted. We limit the adversarial corruption to a finite set of modification rules. We model robust inference as a zero-sum game between an adversary, who selects a modification rule, and a predictor, who wants to accurately predict the state of nature.

There are two variants of the model, one where the adversary needs to pick the modification rule in advance and one where the adversary can select the modification rule after observing the realized uncorrupted input. For both settings we derive efficient, near-optimal policies that run in polynomial time. Our algorithms are based on methodologies for developing local computation algorithms.
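As a rough illustration of this model (not the polynomial-time constructions from the talk), the following Python sketch enumerates a tiny, entirely hypothetical set of observations, modification rules, and deterministic predictor policies, and compares the best achievable worst-case error under both variants: the adversary committing to a rule in advance versus choosing a rule per realized input.

```python
import itertools

# Toy instance: observations in {0, 1, 2} with binary states of nature.
# Each pair is (uncorrupted observation, true label).
data = [(0, 0), (1, 0), (2, 1)]

# Finite set of modification rules available to the adversary.
modification_rules = [
    lambda x: x,            # leave the observation untouched
    lambda x: (x + 1) % 3,  # shift the observation
    lambda x: 0,            # collapse every observation to 0
]

# Predictor policies: all deterministic maps from a (possibly corrupted)
# observation to a predicted label.
policies = [dict(zip(range(3), labels))
            for labels in itertools.product([0, 1], repeat=3)]

def avg_error(policy, rule):
    """Average 0/1 error when every observation is corrupted by `rule`."""
    return sum(policy[rule(x)] != y for x, y in data) / len(data)

# Variant 1: the adversary commits to a single modification rule in advance;
# the predictor picks the deterministic policy with the best worst case.
worst_error_in_advance = min(
    max(avg_error(p, r) for r in modification_rules) for p in policies)

# Variant 2: the adversary sees each realized uncorrupted input and may apply
# a different modification rule to each one.
def per_input_worst_error(policy):
    return sum(max(policy[r(x)] != y for r in modification_rules)
               for x, y in data) / len(data)

worst_error_per_input = min(per_input_worst_error(p) for p in policies)

print("best worst-case error, rule fixed in advance:", worst_error_in_advance)
print("best worst-case error, rule chosen per input:", worst_error_per_input)
```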

We also consider a learning setting where the predictor receives a set of uncorrupted inputs and their classifications. The predictor needs to select a hypothesis, from a known set of hypotheses, and is tested on inputs that the adversary corrupts. We show how to utilize an ERM oracle to derive a near-optimal predictor strategy, namely, picking a hypothesis that minimizes the error on the corrupted test inputs.
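The learning setting can be illustrated in the same toy fashion: the brute-force Python sketch below (hypothetical hypotheses and data; it only exhibits the objective being minimized and does not implement the ERM-oracle reduction) picks, from an explicit hypothesis set, the hypothesis with the smallest error when each test input is corrupted by the worst applicable modification rule.

```python
# Hypothetical hypothesis class: threshold classifiers on an integer feature.
hypotheses = {
    "threshold_at_1": lambda x: int(x >= 1),
    "threshold_at_2": lambda x: int(x >= 2),
    "always_zero":    lambda x: 0,
}

# Clean (uncorrupted) labelled sample given to the predictor.
sample = [(0, 0), (1, 0), (2, 1), (2, 1)]

# Known modification rules the adversary may apply to test inputs.
modification_rules = [lambda x: x, lambda x: min(x + 1, 2), lambda x: 0]

def corrupted_error(h, points):
    """Error of h when each input is hit by its worst-case modification rule."""
    return sum(max(h(r(x)) != y for r in modification_rules)
               for x, y in points) / len(points)

# Select the hypothesis minimizing error on adversarially corrupted inputs,
# using the clean sample as a proxy for the test distribution.
best = min(hypotheses, key=lambda name: corrupted_error(hypotheses[name], sample))
print(best, corrupted_error(hypotheses[best], sample))
```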

Based on joint works with Uriel Feige, Aviad Rubinstein, Robert Schapire, Moshe Tennenholtz, and Shai Vardi.

Yishay Mansour
Fri 2:00 a.m. - 2:30 a.m.
Automated versus do-it-yourself methods for causal inference: Lessons learned from a data analysis competition (Invited Talk)
Jennifer Hill
Fri 2:30 a.m. - 2:45 a.m.
Robust Covariate Shift Classification Using Multiple Feature Views (Contributed Talk)
Angie Liu
Fri 2:45 a.m. - 3:00 a.m.
Poster Spotlights II (Spotlight)
Fri 4:15 a.m. - 4:45 a.m.
Doug Tygar (Invited Talk)
Doug Tygar
Fri 4:45 a.m. - 5:15 a.m.
Adversarial Examples and Adversarial Training (Invited Talk)
Ian Goodfellow
Fri 5:15 a.m. - 5:30 a.m.
Summoning Demons: The Pursuit of Exploitable Bugs in Machine Learning (Contributed Talk)
Octavian Suciu
Fri 5:30 a.m. - 5:45 a.m.
Poster Spotlights III (Spotlight)
Fri 5:45 a.m. - 6:30 a.m.
Poster Session
Fri 6:30 a.m. - 7:00 a.m.
Learning Reliable Objectives (Invited Talk)
Anca Dragan
Fri 7:00 a.m. - 7:30 a.m.
Building and Validating the AI behind the Next-Generation Aircraft Collision Avoidance System (Invited Talk)
Mykel J Kochenderfer
Fri 7:30 a.m. - 7:45 a.m.
Online Prediction with Selfish Experts (Contributed Talk)
Okke Schrijvers
Fri 7:45 a.m. - 8:00 a.m.
TensorFlow Debugger: Debugging Dataflow Graphs for Machine Learning (Contributed Talk)
D. Sculley
Fri 8:00 a.m. - 8:30 a.m.
What are the challenges to making machine learning reliable in practice? (Panel Discussion)

Author Information

Dylan Hadfield-Menell (UC Berkeley)
Adrian Weller (University of Cambridge)

Adrian Weller is Programme Director for AI at The Alan Turing Institute, the UK national institute for data science and AI, where he is also a Turing Fellow leading work on safe and ethical AI. He is a Senior Research Fellow in Machine Learning at the University of Cambridge, and at the Leverhulme Centre for the Future of Intelligence where he leads the project on Trust and Transparency. His interests span AI, its commercial applications and helping to ensure beneficial outcomes for society. He serves on several boards including the Centre for Data Ethics and Innovation. Previously, Adrian held senior roles in finance.

David Duvenaud (University of Toronto)
Jacob Steinhardt (UC Berkeley)
Percy Liang (Stanford University)
