Poster
Glyph: Fast and Accurately Training Deep Neural Networks on Encrypted Data
Qian Lou · Bo Feng · Geoffrey Charles Fox · Lei Jiang

Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3 #1079

Because they lack the expertise to benefit from their data themselves, average users must upload their private data to cloud servers they may not trust. Due to legal or privacy constraints, most users are willing to contribute only encrypted data, and lack the interest or resources to participate in deep neural network (DNN) training in the cloud. To train a DNN on encrypted data in a completely non-interactive way, a recent work proposes a fully homomorphic encryption (FHE)-based technique that implements all activations with Brakerski-Gentry-Vaikuntanathan (BGV)-based lookup tables. However, these inefficient lookup-table-based activations significantly prolong the private training latency of DNNs.

In this paper, we propose Glyph, an FHE-based technique that trains DNNs on encrypted data quickly and accurately by switching between the TFHE (Fast Fully Homomorphic Encryption over the Torus) and BGV cryptosystems. Glyph uses logic-operation-friendly TFHE to implement nonlinear activations, while adopting vectorial-arithmetic-friendly BGV to perform multiply-accumulate operations (MACs). Glyph further applies transfer learning to DNN training to improve test accuracy and to reduce the number of ciphertext-ciphertext MACs in convolutional layers. Our experimental results show that Glyph obtains state-of-the-art accuracy and reduces training latency by 69% to 99% over prior FHE-based privacy-preserving techniques on encrypted datasets.
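To make the division of labor concrete, the following is a minimal plaintext sketch (no real cryptography) of the hybrid evaluation pattern the abstract describes: a BGV-style scheme handles vectorized MACs in linear layers, a TFHE-style scheme handles the nonlinear activation, and the data is explicitly "switched" between the two around each activation. All class and function names here are hypothetical illustrations, not Glyph's actual API.

```python
# Conceptual sketch of Glyph's hybrid TFHE/BGV strategy using plaintext
# stand-ins for ciphertexts. The point is the structure of the pipeline,
# not the cryptography.

class BGVVector:
    """Stand-in for a BGV ciphertext packing a vector into SIMD slots."""
    def __init__(self, slots):
        self.slots = list(slots)

    def mac(self, weights, bias=0):
        # BGV is efficient at vectorized multiply-accumulate over slots.
        return sum(w * x for w, x in zip(weights, self.slots)) + bias

class TFHEValue:
    """Stand-in for a TFHE ciphertext encrypting a single value."""
    def __init__(self, value):
        self.value = value

def switch_bgv_to_tfhe(scalar):
    # Models the BGV -> TFHE scheme switch before a nonlinear activation.
    return TFHEValue(scalar)

def tfhe_relu(ct):
    # TFHE evaluates nonlinearities with logic operations/bootstrapping;
    # here we just apply ReLU to the plaintext stand-in.
    return TFHEValue(max(0, ct.value))

def switch_tfhe_to_bgv(cts):
    # Models the TFHE -> BGV switch before the next linear layer.
    return BGVVector(ct.value for ct in cts)

def hybrid_layer(x, weight_rows, biases):
    """One linear layer (BGV MACs) followed by ReLU (TFHE)."""
    pre = [x.mac(w, b) for w, b in zip(weight_rows, biases)]
    post = [tfhe_relu(switch_bgv_to_tfhe(p)) for p in pre]
    return switch_tfhe_to_bgv(post)

x = BGVVector([1.0, -2.0, 3.0])
out = hybrid_layer(x, [[1, 0, 0], [0, 1, 0]], [0.5, 0.0])
print(out.slots)  # [1.5, 0] -- ReLU zeroes the negative pre-activation
```

In the real system each switch is itself a homomorphic operation with nontrivial cost, which is why Glyph also reduces the number of ciphertext-ciphertext MACs via transfer learning; this sketch only shows where the two cryptosystems sit in the computation.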

Author Information

Qian Lou (Indiana University Bloomington)

I am a third-year Ph.D. student at Indiana University Bloomington.

Bo Feng (Indiana University)
Geoffrey Charles Fox (Indiana University)
Lei Jiang (Indiana University Bloomington)
