SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud
Zahra Ghodsi · Tianyu Gu · Siddharth Garg

Mon Dec 04 06:30 PM -- 10:30 PM (PST) @ Pacific Ballroom #67

Inference using deep neural networks is often outsourced to the cloud since it is a computationally demanding task. However, this raises a fundamental issue of trust: how can a client be sure that the cloud has performed inference correctly? A lazy cloud provider might use a simpler but less accurate model to reduce its own computational load, or worse, maliciously modify the inference results sent to the client. We propose SafetyNets, a framework that enables an untrusted server (the cloud) to provide a client with a short mathematical proof of the correctness of the inference tasks it performs on the client's behalf. Specifically, SafetyNets develops and implements a specialized interactive proof (IP) protocol for verifiable execution of a class of deep neural networks, namely those that can be represented as arithmetic circuits. Our empirical results on three- and four-layer deep neural networks demonstrate that the run-time costs of SafetyNets are low for both the client and the server. SafetyNets detects any incorrect computation of the neural network by the untrusted server with high probability, while achieving state-of-the-art accuracy on the MNIST digit recognition (99.4%) and TIMIT speech recognition (75.22%) tasks.
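The abstract's core idea, checking an outsourced computation far more cheaply than redoing it, can be illustrated with a classic probabilistic check. The sketch below is not SafetyNets' interactive proof protocol (which builds on sum-check over arithmetic circuits); it uses Freivalds' algorithm to verify a single claimed matrix product, the dominant operation in a neural-network layer, as a minimal stand-in. All function names here are illustrative.

```python
import numpy as np

def freivalds_check(A, B, C, trials=20):
    """Probabilistically verify the claim C == A @ B without recomputing it.

    Each trial draws a random 0/1 vector r and checks A @ (B @ r) == C @ r,
    which costs O(n^2) per trial instead of the O(n^3) of a full product.
    An incorrect C passes a single trial with probability at most 1/2,
    so `trials` independent checks bound the false-accept rate by 2**-trials.
    """
    n = C.shape[1]
    for _ in range(trials):
        r = np.random.randint(0, 2, size=(n, 1))
        if not np.array_equal(A @ (B @ r), C @ r):
            return False  # caught an incorrect result
    return True  # accept: correct with probability >= 1 - 2**-trials
```

In the outsourcing setting, the client ships `A` (inputs) and `B` (weights) to the server, receives the claimed product `C`, and runs the cheap check locally; a tampered `C` is rejected with overwhelming probability.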

Author Information

Zahra Ghodsi (New York University)
Tianyu Gu (NYU)
Siddharth Garg (NYU)