Poster
Tue Dec 04 02:00 PM -- 04:00 PM (PST) @ Room 517 AB #158
Contamination Attacks and Mitigation in Multi-Party Machine Learning
Jamie Hayes · Olga Ohrimenko
Machine learning is data hungry; the more data a model has access to in training, the more likely it is to perform well at inference time. Distinct parties may want to combine their local data to gain the benefits of a model trained on a large corpus of data. We consider such a case: parties get access to the model trained on their joint data but do not see each other's individual datasets. We show that one needs to be careful when using this multi-party model, since a potentially malicious party can taint the model by providing contaminated data. We then show how adversarial training can defend against such attacks by preventing the model from learning trends specific to individual parties' data, thereby also guaranteeing party-level membership privacy.
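To make the threat concrete, here is a minimal toy sketch (not the paper's experimental setup; all names and data are hypothetical) of a contamination attack on jointly trained logistic regression. Party A contributes honest data whose label depends only on feature 0; a malicious party B switches on an otherwise-unused "trigger" feature and forces all of its labels to class 1, so the joint model learns a spurious trigger-to-class association:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.5, steps=2000):
    """Full-batch gradient descent on the logistic loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        g = sigmoid(X @ w + b) - y          # per-example dL/dlogit
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

n = 200
# Party A: honest data; the label depends only on feature 0,
# and the "trigger" feature (index 1) is always off.
xA = rng.normal(size=(n, 2))
xA[:, 1] = 0.0
yA = (xA[:, 0] > 0).astype(float)

# Party B, honest version: same distribution as party A.
xB = rng.normal(size=(n, 2))
xB[:, 1] = 0.0
yB = (xB[:, 0] > 0).astype(float)

# Party B, contaminated version: switch the trigger feature on
# and force every label to class 1.
xB_bad = xB.copy()
xB_bad[:, 1] = 1.0
yB_bad = np.ones(n)

w_clean, b_clean = train_logreg(np.vstack([xA, xB]),
                                np.concatenate([yA, yB]))
w_pois, b_pois = train_logreg(np.vstack([xA, xB_bad]),
                              np.concatenate([yA, yB_bad]))

# Probe input: feature 0 is uninformative (0.0) but the trigger is on.
probe = np.array([0.0, 1.0])
p_clean = sigmoid(probe @ w_clean + b_clean)  # near 0.5: trigger means nothing
p_pois = sigmoid(probe @ w_pois + b_pois)     # pushed toward class 1
print(f"clean: {p_clean:.2f}  poisoned: {p_pois:.2f}")
```

The defense proposed in the abstract works at the level of the learned representation: adversarial training penalizes any internal signal from which a discriminator can recover which party a training point came from, which removes exactly the kind of party-specific trend the trigger feature exploits here.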