

Poster

[Re] Privacy-preserving collaborative learning with automatic transformation search

Alfonso Taboada Warmerdam · Lodewijk Loerakker · Lucas Meijer · Ole Nissen

Keywords: [ ReScience - MLRC 2021 ] [ Journal Track ]


Abstract:

Scope of Reproducibility

Gao et al. propose to leverage policies consisting of a series of data augmentations to prevent reconstruction attacks that recover training data from shared gradients. The goals of this study are to: (1) verify the authors' findings about the performance of the found policies and the correlation between the reconstruction metric and the protection provided; (2) explore whether the defence generalizes to an attacker that has knowledge of the policy used.

Methodology

For the experiments conducted in this research, parts of the code from Gao et al. were refactored to allow for clearer and more robust experimentation. Approximately a week of computation time on a 1080 Ti GPU is needed for our experiments.

Results

It was possible to verify the results of the original paper within a reasonable margin of error. However, the reproduced results show that the claimed protection does not generalize to an attacker that has knowledge of the augmentations used. Additionally, the results show that the optimal augmentations are often predictable, since the policies found by the proposed search algorithm mostly consist of the augmentations that perform best individually.

What was easy

The design of the search algorithm allowed for quick iterations of experiments, since the metrics of a single policy can be obtained in under a minute on an average GPU. It was helpful that the authors provided the code of their experiments.

What was difficult

Obtaining the reconstruction score and accuracy of a policy requires training the architecture for about 10 GPU-hours. This makes it difficult to verify how well the search metrics correlate with these scores. It also prevented us from testing the random-policy baseline, as this requires repeating the training at least 10 times, which demands significant computational power.

Communication with original authors

An e-mail was sent to the original authors regarding the differences in results. Unfortunately, no response has been received so far.
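The core idea of treating a defence policy as an ordered series of augmentations can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: images are plain 2D lists, and the two augmentations are toy stand-ins for operations such as flips and crops.

```python
# Hypothetical sketch (not the authors' code): an augmentation "policy"
# is an ordered list of transformations applied to a training image
# before its gradient is computed and shared.

def horizontal_flip(image):
    """Reverse each row of the image."""
    return [row[::-1] for row in image]

def center_crop(image):
    """Drop the outermost ring of pixels."""
    return [row[1:-1] for row in image[1:-1]]

def apply_policy(image, policy):
    """Apply each augmentation in the policy, in order."""
    for augmentation in policy:
        image = augmentation(image)
    return image

image = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]

# A policy found by the search would be a fixed sequence like this.
policy = [horizontal_flip, center_crop]
augmented = apply_policy(image, policy)
```

Because a policy is just a fixed, ordered composition, an attacker who learns which augmentations were used can account for them, which relates to the generalization question investigated in this report.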
