
Poster
Make Sure You're Unsure: A Framework for Verifying Probabilistic Specifications
Leonard Berrada · Sumanth Dathathri · Krishnamurthy Dvijotham · Robert Stanforth · Rudy Bunel · Jonathan Uesato · Sven Gowal · M. Pawan Kumar

Thu Dec 09 08:30 AM -- 10:00 AM (PST)
Most real-world applications involve stochasticity, such as sensor noise or predictive uncertainty, and formal specifications of desired behavior are therefore inherently probabilistic. Despite the promise of formal verification for ensuring the reliability of neural networks, progress on probabilistic specifications has been limited. We first introduce a general formulation of probabilistic specifications for neural networks, which captures both probabilistic networks (e.g., Bayesian neural networks and MC-Dropout networks) and uncertain inputs (distributions over inputs arising from sensor noise or other perturbations). We then propose a general technique to verify such specifications by generalizing the notion of Lagrangian duality, replacing standard Lagrange multipliers with "functional multipliers" that can be arbitrary functions of the activations at a given layer. We show that an optimal choice of functional multipliers yields exact (i.e., sound and complete) verification, and for specific forms of multipliers we develop tractable practical verification algorithms. We empirically validate our algorithms by applying them to Bayesian neural networks (BNNs) and MC-Dropout networks, certifying properties such as adversarial robustness and robust detection of out-of-distribution (OOD) data. On these tasks we provide significantly stronger guarantees than prior work: for instance, for a VGG-64 MC-Dropout CNN trained on CIFAR-10 in a verification-agnostic manner, we improve the certified AUC (a verified lower bound on the true AUC) for robust OOD detection (on CIFAR-100) from $0\% \rightarrow 29\%$. Similarly, for a BNN trained on MNIST, we improve the $\ell_\infty$ robust accuracy from $60.2\% \rightarrow 74.6\%$. Further, on a novel specification -- distributionally robust OOD detection -- we improve the certified AUC from $5\% \rightarrow 23\%$.
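The functional-multiplier idea admits a compact weak-duality sketch. The notation below is chosen here for illustration and simplifies the paper's full layer-wise formulation: write the network as a composition $f = g \circ h$, let $\pi$ denote the input distribution, and let $\psi$ be the quantity whose expectation we wish to bound. Then for any function $\lambda(\cdot)$ of the intermediate activations $z = h(x)$,

```latex
\mathbb{E}_{x \sim \pi}\!\left[\psi(f(x))\right]
\;\le\;
\max_{z}\,\left[\psi(g(z)) - \lambda(z)\right]
\;+\;
\mathbb{E}_{x \sim \pi}\!\left[\lambda(h(x))\right],
```

since adding and subtracting $\lambda(h(x))$ and then maximizing the first term over all $z$ can only increase the right-hand side. A standard Lagrangian restricts $\lambda$ to linear functions $\lambda(z) = \mu^\top z$; allowing arbitrary $\lambda$ can make the bound tight -- for example, the (generally intractable) choice $\lambda(z) = \psi(g(z))$ drives the first term to zero and makes the second term equal to the true expectation, consistent with the exactness claim in the abstract.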

Author Information

Krishnamurthy Dvijotham (DeepMind)

Krishnamurthy Dvijotham is a research scientist at Google DeepMind. Until recently, he was a research engineer in the optimization and control group at Pacific Northwest National Laboratory (PNNL). Before that, he was a postdoctoral fellow at the Center for the Mathematics of Information at Caltech. He received his PhD in computer science and engineering from the University of Washington, Seattle, in 2014 and a bachelor's degree from IIT Bombay in 2008. His research interests span stochastic control theory, artificial intelligence, machine learning, and markets/economics, and his work is motivated primarily by problems arising in large-scale infrastructure systems such as the power grid. His research has won awards at several conferences in optimization, AI, and machine learning.