Poster
AutoPrivacy: Automated Layer-wise Parameter Selection for Secure Neural Network Inference
Qian Lou · Song Bian · Lei Jiang

Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3 #857
A Hybrid Privacy-Preserving Neural Network (HPPNN), which implements linear layers by Homomorphic Encryption (HE) and nonlinear layers by Garbled Circuit (GC), is one of the most promising secure solutions to emerging Machine Learning as a Service (MLaaS). Unfortunately, an HPPNN suffers from long inference latency, e.g., $\sim100$ seconds per image, which makes MLaaS unsatisfactory. Because HE-based linear layers account for $93\%$ of an HPPNN's inference latency, it is critical to select a set of HE parameters that minimizes the computational overhead of linear layers. Prior HPPNNs over-pessimistically select huge HE parameters to maintain large noise budgets, since they use the same set of HE parameters for the entire network and ignore the network's error tolerance. In this paper, for fast and accurate secure neural network inference, we propose an automated layer-wise parameter selector, AutoPrivacy, that leverages deep reinforcement learning to automatically determine a set of HE parameters for each linear layer in an HPPNN. The learning-based HE parameter selection policy outperforms conventional rule-based policies. Compared to prior HPPNNs, AutoPrivacy-optimized HPPNNs reduce inference latency by $53\%\sim70\%$ with negligible loss of accuracy.
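The idea of layer-wise HE parameter selection can be illustrated with a minimal sketch. The cost and noise-budget models below are toy placeholders (not the paper's actual cryptographic estimates), and `select_params` uses a simple rule-based search as a stand-in for the learned reinforcement-learning policy; all names are hypothetical.

```python
import math

# Candidate polynomial modulus degrees for an HE scheme (illustrative values)
CANDIDATE_N = [1024, 2048, 4096, 8192]

def noise_budget(n):
    # Toy model: a larger degree permits a larger ciphertext modulus,
    # hence a larger noise budget (in bits). Values are placeholders.
    return {1024: 27, 2048: 54, 4096: 109, 8192: 218}[n]

def he_cost(n):
    # Toy model: HE operation cost grows roughly as n * log(n)
    return n * math.log2(n)

def select_params(layer_noise_demands):
    """For each layer, pick the cheapest degree whose noise budget
    covers that layer's demand -- a rule-based stand-in for the
    learned per-layer selection policy."""
    choices = []
    for demand in layer_noise_demands:
        feasible = [n for n in CANDIDATE_N if noise_budget(n) >= demand]
        choices.append(min(feasible, key=he_cost))
    return choices

# Example: three linear layers with different noise demands (in bits);
# each layer gets its own, minimal HE parameter instead of one
# pessimistic network-wide choice.
print(select_params([20, 50, 100]))  # -> [1024, 2048, 4096]
```

The contrast with prior work is the last line: a network-wide policy would pick 4096 for all three layers to satisfy the worst case, whereas per-layer selection lets the first two layers use much cheaper parameters.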

Author Information

Qian Lou (Indiana University Bloomington)

I am a third-year Ph.D. student at Indiana University Bloomington.

Song Bian (Kyoto University)
Lei Jiang (Indiana University Bloomington)
