
Photonic Differential Privacy with Direct Feedback Alignment
Ruben Ohana · Hamlet Medina · Julien Launay · Alessandro Cappelli · Iacopo Poli · Liva Ralaivola · Alain Rakotomamonjy

Thu Dec 09 08:30 AM -- 10:00 AM (PST)

Optical Processing Units (OPUs) -- low-power photonic chips dedicated to large-scale random projections -- have been used in previous work to train deep neural networks using Direct Feedback Alignment (DFA), an effective alternative to backpropagation. Here, we demonstrate how to leverage the intrinsic noise of optical random projections to build a differentially private DFA mechanism, making OPUs a solution of choice for private-by-design training. We provide a theoretical analysis of our adaptive privacy mechanism, carefully measuring how the noise of optical random projections propagates through the process and gives rise to provable Differential Privacy. Finally, we conduct experiments demonstrating the ability of our learning procedure to achieve solid end-task performance.

Author Information

Ruben Ohana (Ecole Normale Supérieure & LightOn)
Hamlet Medina (Criteo)
Julien Launay (École Normale Supérieure)
Alessandro Cappelli (LightOn)
Iacopo Poli (LightOn)
Liva Ralaivola (LIF, IUF, Aix-Marseille University, CNRS)
Alain Rakotomamonjy (Université de Rouen Normandie & Criteo AI Lab)
