Adversarialism in Machine Learning and the Law

Rediet Abebe · Moritz Hardt · Angela Jin · Ludwig Schmidt · Mírian Silva · Tainá Turella · Rebecca Wexler



As applications of machine learning have multiplied over the past decade, so have the associated performance expectations. With machine learning now deployed in safety- and security-critical areas such as autonomous vehicles and healthcare, the research community has begun to closely scrutinize the generalization capabilities of current machine learning models. One prominent research thread in reliable machine learning takes an explicitly adversarial perspective, e.g., the widely studied phenomenon of adversarial examples in computer vision and other domains. Researchers have proposed a multitude of attacks on trained models, as well as new training algorithms that increase robustness to such attacks. In this workshop, we connect this burgeoning research field to an important application area that offers a clear motivation for an explicitly adversarial perspective: the U.S. criminal legal system.
