Trustworthy Federated Learning
Bo Li

Fri Dec 02 06:35 AM -- 06:53 AM (PST)

Advances in machine learning have led to the rapid and widespread deployment of learning-based inference and decision-making in safety-critical applications such as autonomous driving and security diagnostics. Current machine learning systems, however, assume that training and test data follow the same or similar distributions, and do not account for active adversaries manipulating either distribution. Recent work has demonstrated that motivated adversaries can circumvent anomaly detectors or other machine learning models at test time through evasion attacks, or can inject well-crafted malicious instances into the training data to induce errors at inference time through poisoning attacks, especially in the distributed learning setting. In this talk, I will describe my recent research on security and privacy problems in federated learning, with a focus on potential certifiable defense approaches, differentially private federated learning, and fairness in FL. We will also discuss other defense principles for developing practical robust learning systems with trustworthiness guarantees.
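To make the differentially private federated learning setting mentioned above concrete, the following is a minimal sketch of one aggregation round in the style of DP-FedAvg: each client update is clipped to a fixed L2 norm so that no single (possibly poisoned) contribution can dominate, and Gaussian noise calibrated to that clipping bound is added to the average. All function and parameter names here are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def dp_fedavg_round(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One round of DP-style federated averaging (illustrative sketch).

    Clips each client's update to an L2 norm bound, averages the
    clipped updates, and adds Gaussian noise scaled to the bound.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        # Clipping bounds each client's influence on the aggregate,
        # which both limits poisoning and enables the DP guarantee.
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise standard deviation is calibrated to the clipping bound
    # (the per-client sensitivity of the average).
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)

# Example: two clients, one with a large (potentially malicious) update.
updates = [np.array([3.0, 4.0]), np.array([0.5, -0.5])]
noisy_avg = dp_fedavg_round(updates, clip_norm=1.0, noise_multiplier=0.1)
```

In a real system the clipping norm and noise multiplier would be chosen via a privacy accountant to meet a target (epsilon, delta) budget; this sketch only shows the mechanics of clip-then-noise aggregation.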

Author Information

Bo Li (UIUC)