Workshop on Machine Learning Safety
Jacob Steinhardt · Victoria Krakovna · Dan Hendrycks · Nicholas Carlini · Dawn Song

Fri Dec 09 07:00 AM -- 02:00 PM (PST) @ Virtual
Event URL: https://neurips2022.mlsafety.org

Designing systems to operate safely in real-world settings is a topic of growing interest in machine learning. As ML becomes more capable and widespread, long-term and long-tail safety risks will grow in importance. To make the adoption of ML more beneficial, various aspects of safety engineering and oversight need to be proactively addressed by the research community. This workshop will bring together researchers from the machine learning community to focus on research topics in four areas: Robustness, Monitoring, Alignment, and Systemic Safety.
* Robustness is designing systems to be reliable in the face of adversaries and highly unusual situations.
* Monitoring is detecting anomalies, malicious use, and discovering unintended model functionality.
* Alignment is building models that represent and safely optimize difficult-to-specify human values.
* Systemic Safety is using ML to address broader risks related to how ML systems are handled, such as defending against cyberattacks, facilitating cooperation, and improving the decision-making of public servants.

Author Information

Jacob Steinhardt (UC Berkeley)
Victoria Krakovna (DeepMind)
Dan Hendrycks (Center for AI Safety)
Nicholas Carlini (Google)
Dawn Song (UC Berkeley)
