Workshop
Fri Dec 13 08:00 AM -- 06:15 PM (PST) @ West 118 - 120
Workshop on Federated Learning for Data Privacy and Confidentiality
Lixin Fan · Jakub Konečný · Yang Liu · Brendan McMahan · Virginia Smith · Han Yu

Overview

Privacy and security have become critical concerns in recent years, particularly as companies and organizations increasingly collect detailed information about their products and users. This information can enable machine learning methods that produce better products. However, it also has the potential to allow for misuse, especially when private data about individuals is involved. Recent research shows that privacy and utility need not be at odds; both can be achieved through careful design and analysis. The need for such research is reinforced by new legal constraints, led by the European Union's General Data Protection Regulation (GDPR), which is already inspiring novel legislative approaches around the world, such as the Cybersecurity Law of the People's Republic of China and the California Consumer Privacy Act of 2018.

An approach that has the potential to address a number of problems in this space is federated learning (FL). FL is an ML setting where many clients (e.g., mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g., a service provider), while keeping the training data decentralized. Organizations and mobile devices have access to increasing amounts of sensitive data, and scrutiny of ML privacy and data-handling practices is increasing correspondingly. These trends have produced significant interest in FL, since it provides a viable path to state-of-the-art ML without the centralized collection of training data, and without the risks and responsibilities that come with such centralization. Nevertheless, significant challenges remain open in the FL setting; solving them will require novel techniques from multiple fields, as well as improved open-source tooling for both FL research and real-world deployment.
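
To make the setting concrete, below is a minimal sketch of the federated averaging pattern described above, not any particular system's implementation: a server broadcasts the current model, clients each take a few local gradient steps on data that never leaves them, and the server averages the returned models weighted by local dataset size. The function names and the toy least-squares task are illustrative assumptions.

```python
import numpy as np

def local_train(w, data, lr=0.1, steps=5):
    # Hypothetical client-side update: a few SGD steps on a local
    # least-squares objective 1/(2n) * ||Xw - y||^2. Real clients
    # would train whatever model the application calls for.
    X, y = data
    w = w.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

def federated_averaging(w, clients, rounds=20):
    # Server loop: broadcast w, collect locally trained models, and
    # average them weighted by each client's dataset size. The raw
    # data never leaves the clients.
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    for _ in range(rounds):
        local_models = [local_train(w, c) for c in clients]
        w = sum(s * lw for s, lw in zip(sizes, local_models)) / sizes.sum()
    return w

# Toy run: three clients holding private linear-regression data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (30, 50, 20):  # unequal client dataset sizes
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=n)))
print(federated_averaging(np.zeros(2), clients))  # close to [2, -1]
```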

This workshop aims to bring together academic researchers and industry practitioners with common interests in this domain. For industry participants, we intend to create a forum to communicate which problems are practically relevant. For academic participants, we hope to make it easier to become productive in this area. Overall, the workshop will provide an opportunity to share the most recent and innovative work in FL, and to discuss open problems and relevant approaches. Encouraged technical topics include general computation based on decentralized data (i.e., not only machine learning) and how such computations can be combined with other research areas, such as differential privacy, secure multi-party computation, computational efficiency, and coding theory. Contributions in theory as well as applications are welcome, including proposals for novel system design. Work on fully-decentralized (peer-to-peer) learning will also be considered, as there is significant overlap in both interest and techniques with federated learning.
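
As one illustration of combining FL with secure multi-party computation, here is a minimal sketch (assuming numpy and toy in-memory "clients") of the pairwise-masking idea behind secure aggregation: each pair of clients shares a random mask that one adds and the other subtracts, so the masks cancel in the sum and the server learns only the aggregate. A real protocol would derive the masks from key agreement and handle client dropouts.

```python
import numpy as np

rng = np.random.default_rng(1)
n_clients, dim = 3, 4
updates = [rng.normal(size=dim) for _ in range(n_clients)]  # private updates

# Each unordered pair (i, j), i < j, shares a random mask.
pair_mask = {(i, j): rng.normal(size=dim)
             for i in range(n_clients) for j in range(i + 1, n_clients)}

masked = []
for i in range(n_clients):
    m = updates[i].copy()
    for j in range(n_clients):
        if i < j:
            m += pair_mask[(i, j)]   # lower-indexed client adds the mask
        elif j < i:
            m -= pair_mask[(j, i)]   # higher-indexed client subtracts it
    masked.append(m)

# The server sums the masked updates; all pairwise masks cancel.
print(np.allclose(sum(masked), sum(updates)))  # True
```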

Call for Contributions
We welcome high-quality submissions in the broad area of federated learning (FL). A few (non-exhaustive) topics of interest include:
- Optimization algorithms for FL, particularly communication-efficient algorithms tolerant of non-IID data
- Approaches that scale FL to larger models, including model and gradient compression techniques (see the sketch after this list)
- Novel applications of FL
- Theory for FL
- Approaches to enhancing the security and privacy of FL, including cryptographic techniques and differential privacy
- Bias and fairness in the FL setting
- Attacks on FL, including model poisoning, and corresponding defenses
- Incentive mechanisms for FL
- Software and systems for FL
- Novel applications of techniques from other fields to the FL setting: information theory, multi-task learning, model-agnostic meta-learning, etc.
- Work on fully-decentralized (peer-to-peer) learning, as there is significant overlap in both interest and techniques with FL
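
To give a flavor of the gradient compression topic above, here is a minimal, generic sketch (assuming numpy; not tied to any submission) of top-k sparsification, in which a client transmits only the k largest-magnitude entries of its gradient:

```python
import numpy as np

def topk_sparsify(grad, k):
    # Keep only the k largest-magnitude entries; everything else is
    # zeroed and need not be transmitted. Practical systems also send
    # the surviving indices, and often accumulate the dropped residual
    # locally for later rounds.
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    out = np.zeros_like(grad)
    out[idx] = grad[idx]
    return out

g = np.array([0.05, -3.0, 0.2, 1.5, -0.01])
print(topk_sparsify(g, 2))  # [ 0.  -3.   0.   1.5  0. ]
```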

Submissions in the form of extended abstracts must be at most 4 pages long (not including references), be anonymized, and adhere to the NeurIPS 2019 format. Submissions will be accepted as contributed talks or poster presentations. The workshop will not have formal proceedings, but accepted papers will be posted on the workshop website.

We support reproducible research and will sponsor a prize for the best contribution that provides code to reproduce its results.

Submission link: https://easychair.org/conferences/?conf=flneurips2019

Important Dates (2019)
Submission deadline: Sep 9
Author notification: Sep 30
Camera-Ready Papers Due: TBD
Workshop: Dec 13

Organizers:
Lixin Fan, WeBank
Jakub Konečný, Google
Yang Liu, WeBank
Brendan McMahan, Google
Virginia Smith, CMU
Han Yu, NTU

Invited Speakers:
Francoise Beaufays, Principal Researcher, Google
Shahrokh Daijavad, Distinguished Research Staff Member, IBM
Dawn Song, Professor, University of California, Berkeley
Ameet Talwalkar, Assistant Professor, CMU; Chief Scientist, Determined AI
Max Welling, Professor, University of Amsterdam; VP Technologies, Qualcomm
Qiang Yang, Hong Kong University of Science and Technology, Hong Kong; Chief AI Officer, WeBank

FAQ
Can supplementary material be added beyond the 4-page limit and are there any restrictions on it?
Yes, you may include additional supplementary material, but you should ensure that the main paper is self-contained, since looking at supplementary material is at the discretion of the reviewers. The supplementary material should also follow the same NeurIPS format as the paper and be limited to a reasonable amount (max 10 pages in addition to the main submission).
Can a submission to this workshop be submitted to another NeurIPS workshop in parallel?
We discourage this, as it leads to more work for reviewers across multiple workshops. Our suggestion is to pick one workshop to submit to.
Can a paper be submitted to the workshop that has already appeared at a previous conference with published proceedings?
We won’t be accepting such submissions unless they have been adapted to contain significantly new results (where novelty is one of the qualities reviewers will be asked to evaluate).
Can a paper be submitted to the workshop that is currently under review or will be under review at a conference during the review phase?
It is fine to submit a condensed version (i.e., 4 pages) of a parallel conference submission, provided this is also acceptable to the conference in question. Our workshop does not have archival proceedings, and therefore parallel submissions of extended versions to other conferences are acceptable.

=====================================================
Accepted papers:
1. Paul Pu Liang, Terrance Liu, Liu Ziyin, Russ Salakhutdinov and Louis-Philippe Morency. Think Locally, Act Globally: Federated Learning with Local and Global Representations

2. Xin Yao, Tianchi Huang, Rui-Xiao Zhang, Ruiyu Li and Lifeng Sun. Federated Learning with Unbiased Gradient Aggregation and Controllable Meta Updating

3. Daniel Peterson, Pallika Kanani and Virendra Marathe. Private Federated Learning with Domain Adaptation

4. Daliang Li and Junpu Wang. FedMD: Heterogenous Federated Learning via Model Distillation

5. Sebastian Caldas, Jakub Konečný, H. Brendan McMahan and Ameet Talwalkar. Mitigating the Impact of Federated Learning on Client Resources

6. Jianyu Wang, Anit Sahu, Zhouyi Yang, Gauri Joshi and Soummya Kar. MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling

7. Sebastian Caldas, Sai Meher Karthik Duddu, Peter Wu, Tian Li, Jakub Konečný, H. Brendan McMahan, Virginia Smith and Ameet Talwalkar. LEAF: A Benchmark for Federated Settings

8. Yihan Jiang, Jakub Konečný, Keith Rush and Sreeram Kannan. Improving Federated Learning Personalization via Model Agnostic Meta Learning

9. Zhicong Liang, Bao Wang, Stanley Osher and Yuan Yao. Exploring Private Federated Learning with Laplacian Smoothing

10. Tribhuvanesh Orekondy, Seong Joon Oh, Yang Zhang, Bernt Schiele and Mario Fritz. Gradient-Leaks: Understanding Deanonymization in Federated Learning

11. Yang Liu, Yan Kang, Xinwei Zhang, Liping Li and Mingyi Hong. A Communication Efficient Vertical Federated Learning Framework

12. Ahmed Khaled, Konstantin Mishchenko and Peter Richtárik. Better Communication Complexity for Local SGD

13. Yang Liu, Xiong Zhang, Shuqi Qin and Xiaoping Lei. Differentially Private Linear Regression over Fully Decentralized Datasets

14. Florian Hartmann, Sunah Suh, Arkadiusz Komarzewski, Tim D. Smith and Ilana Segall. Federated Learning for Ranking Browser History Suggestions

15. Aleksei Triastcyn and Boi Faltings. Federated Learning with Bayesian Differential Privacy

16. Jack Goetz, Kshitiz Malik, Duc Bui, Seungwhan Moon, Honglei Liu and Anuj Kumar. Active Federated Learning

17. Kartikeya Bhardwaj, Wei Chen and Radu Marculescu. FedMAX: Activation Entropy Maximization Targeting Effective Non-IID Federated Learning

18. Mingshu Cong, Zhongming Ou, Yanxin Zhang, Han Yu, Xi Weng, Jiabao Qu, Siu Ming Yiu, Yang Liu and Qiang Yang. Neural Network Optimization for a VCG-based Federated Learning Incentive Mechanism

19. Kai Yang, Tao Fan, Tianjian Chen, Yuanming Shi and Qiang Yang. A Quasi-Newton Method Based Vertical Federated Learning Framework for Logistic Regression

20. Suyi Li, Yong Cheng, Yang Liu and Wei Wang. Abnormal Client Behavior Detection in Federated Learning

21. Songtao Lu, Yawen Zhang, Yunlong Wang and Christina Mack. Learn Electronic Health Records by Fully Decentralized Federated Learning

22. Shicong Cen, Huishuai Zhang, Yuejie Chi, Wei Chen and Tie-Yan Liu. Convergence and Regularization of Distributed Stochastic Variance Reduced Methods

23. Zhaorui Li, Zhicong Huang, Chaochao Chen and Cheng Hong. Quantification of the Leakage in Federated Learning

24. Tzu-Ming Harry Hsu, Hang Qi and Matthew Brown. Measuring the Effects of Non-Identical Data Distribution for Federated Visual Classification

25. Boyue Li, Shicong Cen, Yuxin Chen and Yuejie Chi. Communication-Efficient Distributed Optimization in Networks with Gradient Tracking

26. Khaoula El Mekkaoui, Paul Blomstedt, Diego Mesquita and Samuel Kaski. Towards federated stochastic gradient Langevin dynamics

27. Felix Sattler, Klaus-Robert Müller and Wojciech Samek. Clustered Federated Learning

28. Ziteng Sun, Peter Kairouz, Ananda Theertha Suresh and Brendan McMahan. Backdoor Attacks on Federated Learning and Corresponding Defenses

29. Neta Shoham, Tomer Avidor, Aviv Keren, Nadav Israel, Daniel Benditkis, Liron Mor-Yosef and Itai Zeitak. Overcoming Forgetting in Federated Learning on Non-IID Data

30. Ahmed Khaled and Peter Richtárik. Gradient Descent with Compressed Iterates

31. Jiahuan Luo, Xueyang Wu, Yun Luo, Anbu Huang, Yunfeng Huang, Yang Liu and Qiang Yang. Real-World Image Datasets for Federated Learning

32. Ahmed Khaled, Konstantin Mishchenko and Peter Richtárik. First Analysis of Local GD on Heterogeneous Data

33. Dashan Gao, Ce Ju, Xiguang Wei, Yang Liu, Tianjian Chen and Qiang Yang. HHHFL: Hierarchical Heterogeneous Horizontal Federated Learning for Electroencephalography

Schedule:
Opening remarks
Federated Learning for Recommendation Systems (Invited talk)
TBD (Invited talk)
Coffee break and posters (Break)
TBD (Invited talk)
TBD (Invited talk)
Think Locally, Act Globally: Federated Learning with Local and Global Representations (Contributed talk)
FedMD: Heterogenous Federated Learning via Model Distillation (Contributed talk)
Private Federated Learning with Domain Adaptation (Contributed talk)
Improving Federated Learning Personalization via Model Agnostic Meta Learning (Contributed talk)
Lunch break and posters (Break)
TBD (Invited talk)
TBD (Invited talk)
MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling (Contributed talk)
Mitigating the Impact of Federated Learning on Client Resources (Contributed talk)
A Communication Efficient Vertical Federated Learning Framework (Contributed talk)
Better Communication Complexity for Local SGD (Contributed talk)
Coffee break and posters (Break)
TBD (Invited talk)
FOCUS: Federated Opportunistic Computing for Ubiquitous Systems (Invited talk)
Federated Learning with Unbiased Gradient Aggregation and Controllable Meta Updating (Contributed talk)
Exploring Private Federated Learning with Laplacian Smoothing (Contributed talk)
Gradient-Leaks: Understanding Deanonymization in Federated Learning (Contributed talk)
Federated Learning with Bayesian Differential Privacy (Contributed talk)
Panel discussion (Panel)
Closing Remarks (Closing)