Overview
Privacy and security have become critical concerns in recent years, particularly as companies and organizations increasingly collect detailed information about their products and users. This information can enable machine learning methods that produce better products, but it also has the potential for misuse, especially when private data about individuals is involved. Recent research shows that privacy and utility need not be at odds; both can be achieved through careful design and analysis. The need for such research is reinforced by new legal constraints, led by the European Union's General Data Protection Regulation (GDPR), which is already inspiring novel legislative approaches around the world, such as the Cybersecurity Law of the People's Republic of China and the California Consumer Privacy Act of 2018.
An approach with the potential to address a number of problems in this space is federated learning (FL). FL is an ML setting where many clients (e.g., mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g., a service provider), while keeping the training data decentralized. Organizations and mobile devices have access to increasing amounts of sensitive data, and scrutiny of ML privacy and data handling practices is increasing correspondingly. These trends have produced significant interest in FL, since it provides a viable path to state-of-the-art ML without the centralized collection of training data, and without the risks and responsibilities that come with such centralization. Nevertheless, significant challenges remain open in the FL setting, the solution of which will require novel techniques from multiple fields, as well as improved open-source tooling for both FL research and real-world deployment.
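To make the setting concrete, the following is a minimal, self-contained sketch of the federated averaging (FedAvg) pattern that underlies much FL work: the server broadcasts a model, clients train locally on data that never leaves the device, and the server averages the returned weights. The function names and the toy linear model are illustrative assumptions, not code from any particular FL framework.

```python
import numpy as np

def local_update(w, data, lr=0.01, epochs=1):
    """One client's step: a few epochs of SGD on its local data only.
    The model is a toy linear regressor, purely for illustration."""
    w = w.copy()
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w @ x - y) * x  # gradient of the squared error
            w -= lr * grad
    return w

def federated_averaging(global_w, client_datasets, rounds=20):
    """Server loop: broadcast the current model, let each client train
    locally, then average the returned weights in proportion to local
    dataset size. Raw data never leaves the clients; only model
    parameters are exchanged."""
    total = sum(len(d) for d in client_datasets)
    for _ in range(rounds):
        local_ws = [local_update(global_w, d) for d in client_datasets]
        global_w = sum(w * len(d) / total
                       for w, d in zip(local_ws, client_datasets))
    return global_w

# Toy usage: three clients holding non-IID slices of a linear task.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = [[(x, float(x @ true_w)) for x in rng.normal(i, 1.0, (20, 2))]
           for i in range(3)]
print(federated_averaging(np.zeros(2), clients))  # approaches [2., -1.]
```

Even this toy version surfaces the workshop's themes: the averaged update can leak information about client data, communication of full weight vectors is costly, and non-IID client distributions (the different means above) slow convergence.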
This workshop aims to bring together academic researchers and industry practitioners with common interests in this domain. For industry participants, we intend to create a forum for communicating which problems are practically relevant. For academic participants, we hope to make it easier to become productive in this area. Overall, the workshop will provide an opportunity to share the most recent and innovative work in FL and to discuss open problems and relevant approaches. Encouraged technical topics include general computation based on decentralized data (i.e., not only machine learning) and ways such computations can be combined with other research areas, such as differential privacy, secure multi-party computation, computational efficiency, and coding theory. Contributions in theory as well as applications are welcome, including proposals for novel system design. Work on fully-decentralized (peer-to-peer) learning will also be considered, as there is significant overlap in both interest and techniques with federated learning.
Call for Contributions
We welcome high-quality submissions in the broad area of federated learning (FL). A few (non-exhaustive) topics of interest include:
- Optimization algorithms for FL, particularly communication-efficient algorithms tolerant of non-IID data
- Approaches that scale FL to larger models, including model and gradient compression techniques (see the sketch after this list)
- Novel applications of FL
- Theory for FL
- Approaches to enhancing the security and privacy of FL, including cryptographic techniques and differential privacy
- Bias and fairness in the FL setting
- Attacks on FL, including model poisoning, and corresponding defenses
- Incentive mechanisms for FL
- Software and systems for FL
- Novel applications of techniques from other fields to the FL setting: information theory, multi-task learning, model-agnostic meta-learning, etc.
- Fully-decentralized (peer-to-peer) learning, which has significant overlap with FL in both interest and techniques
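As a concrete illustration of the gradient-compression topic above, the sketch below shows top-k sparsification with error feedback, one common way to cut per-round communication. The class and parameter names are illustrative assumptions, not any particular system's API.

```python
import numpy as np

class TopKCompressor:
    """Illustrative top-k gradient sparsifier with error feedback:
    each round, only the k largest-magnitude coordinates are sent,
    and the untransmitted remainder is carried over to the next
    round, a correction widely reported to stabilize training with
    compressed updates."""

    def __init__(self, dim, k):
        self.k = k
        self.residual = np.zeros(dim)  # gradient mass not yet sent

    def compress(self, grad):
        corrected = grad + self.residual
        idx = np.argsort(np.abs(corrected))[-self.k:]  # top-k coords
        vals = corrected[idx]
        sent = np.zeros_like(corrected)
        sent[idx] = vals
        self.residual = corrected - sent  # feed error back next round
        return idx, vals  # roughly k/dim of the uncompressed traffic

# Toy usage: transmit only 5% of a 1000-dimensional "gradient".
rng = np.random.default_rng(1)
compressor = TopKCompressor(dim=1000, k=50)
indices, values = compressor.compress(rng.normal(size=1000))
```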
Submissions in the form of extended abstracts must be at most 4 pages long (not including references), be anonymized, and adhere to the NeurIPS 2019 format. Submissions will be accepted as contributed talks or poster presentations. The workshop will not have formal proceedings, but accepted papers will be posted on the workshop website.
We support reproducible research and will sponsor a prize for the best contribution that provides code to reproduce its results.
Submission link: https://easychair.org/conferences/?conf=flneurips2019
Important Dates (2019)
Submission deadline: Sep 9
Author notification: Sep 30
Camera-Ready Papers Due: TBD
Workshop: Dec 13
Organizers:
Lixin Fan, WeBank
Jakub Konečný, Google
Yang Liu, WeBank
Brendan McMahan, Google
Virginia Smith, CMU
Han Yu, NTU
Invited Speakers:
Françoise Beaufays, Principal Researcher, Google
Shahrokh Daijavad, Distinguished Research Staff Member, IBM
Dawn Song, Professor, University of California, Berkeley
Ameet Talwalkar, Assistant Professor, CMU; Chief Scientist, Determined AI
Max Welling, Professor, University of Amsterdam; VP Technologies, Qualcomm
Qiang Yang, Hong Kong University of Science and Technology, Hong Kong; Chief AI Officer, WeBank
FAQ
Can supplementary material be added beyond the 4-page limit and are there any restrictions on it?
Yes, you may include additional supplementary material, but you should ensure that the main paper is self-contained, since looking at supplementary material is at the discretion of the reviewers. The supplementary material should also follow the same NeurIPS format as the paper and be limited to a reasonable amount (max 10 pages in addition to the main submission).
Can a submission to this workshop be submitted to another NeurIPS workshop in parallel?
We discourage this, as it leads to more work for reviewers across multiple workshops. Our suggestion is to pick one workshop to submit to.
Can a paper be submitted to the workshop that has already appeared at a previous conference with published proceedings?
We won’t be accepting such submissions unless they have been adapted to contain significantly new results (where novelty is one of the qualities reviewers will be asked to evaluate).
Can a paper be submitted to the workshop that is currently under review or will be under review at a conference during the review phase?
It is fine to submit a condensed version (i.e., 4 pages) of a parallel conference submission, provided this is also acceptable to the conference in question. Our workshop does not have archival proceedings, and therefore parallel submissions of extended versions to other conferences are acceptable.
=====================================================
Accepted papers:
1. Paul Pu Liang, Terrance Liu, Liu Ziyin, Russ Salakhutdinov and Louis-Philippe Morency. Think Locally, Act Globally: Federated Learning with Local and Global Representations
2. Xin Yao, Tianchi Huang, Rui-Xiao Zhang, Ruiyu Li and Lifeng Sun. Federated Learning with Unbiased Gradient Aggregation and Controllable Meta Updating
3. Daniel Peterson, Pallika Kanani and Virendra Marathe. Private Federated Learning with Domain Adaptation
4. Daliang Li and Junpu Wang. FedMD: Heterogenous Federated Learning via Model Distillation
5. Sebastian Caldas, Jakub Konečný, H. Brendan McMahan and Ameet Talwalkar. Mitigating the Impact of Federated Learning on Client Resources
6. Jianyu Wang, Anit Sahu, Zhouyi Yang, Gauri Joshi and Soummya Kar. MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling
7. Sebastian Caldas, Sai Meher Karthik Duddu, Peter Wu, Tian Li, Jakub Konečný, H. Brendan McMahan, Virginia Smith and Ameet Talwalkar. LEAF: A Benchmark for Federated Settings
8. Yihan Jiang, Jakub Konečný, Keith Rush and Sreeram Kannan. Improving Federated Learning Personalization via Model Agnostic Meta Learning
9. Zhicong Liang, Bao Wang, Stanley Osher and Yuan Yao. Exploring Private Federated Learning with Laplacian Smoothing
10. Tribhuvanesh Orekondy, Seong Joon Oh, Yang Zhang, Bernt Schiele and Mario Fritz. Gradient-Leaks: Understanding Deanonymization in Federated Learning
11. Yang Liu, Yan Kang, Xinwei Zhang, Liping Li and Mingyi Hong. A Communication Efficient Vertical Federated Learning Framework
12. Ahmed Khaled, Konstantin Mishchenko and Peter Richtárik. Better Communication Complexity for Local SGD
13. Yang Liu, Xiong Zhang, Shuqi Qin and Xiaoping Lei. Differentially Private Linear Regression over Fully Decentralized Datasets
14. Florian Hartmann, Sunah Suh, Arkadiusz Komarzewski, Tim D. Smith and Ilana Segall. Federated Learning for Ranking Browser History Suggestions
15. Aleksei Triastcyn and Boi Faltings. Federated Learning with Bayesian Differential Privacy
16. Jack Goetz, Kshitiz Malik, Duc Bui, Seungwhan Moon, Honglei Liu and Anuj Kumar. Active Federated Learning
17. Kartikeya Bhardwaj, Wei Chen and Radu Marculescu. FedMAX: Activation Entropy Maximization Targeting Effective Non-IID Federated Learning
18. Mingshu Cong, Zhongming Ou, Yanxin Zhang, Han Yu, Xi Weng, Jiabao Qu, Siu Ming Yiu, Yang Liu and Qiang Yang. Neural Network Optimization for a VCG-based Federated Learning Incentive Mechanism
19. Kai Yang, Tao Fan, Tianjian Chen, Yuanming Shi and Qiang Yang. A Quasi-Newton Method Based Vertical Federated Learning Framework for Logistic Regression
20. Suyi Li, Yong Cheng, Yang Liu and Wei Wang. Abnormal Client Behavior Detection in Federated Learning
21. Songtao Lu, Yawen Zhang, Yunlong Wang and Christina Mack. Learn Electronic Health Records by Fully Decentralized Federated Learning
22. Shicong Cen, Huishuai Zhang, Yuejie Chi, Wei Chen and Tie-Yan Liu. Convergence and Regularization of Distributed Stochastic Variance Reduced Methods
23. Zhaorui Li, Zhicong Huang, Chaochao Chen and Cheng Hong. Quantification of the Leakage in Federated Learning
24. Tzu-Ming Harry Hsu, Hang Qi and Matthew Brown. Measuring the Effects of Non-Identical Data Distribution for Federated Visual Classification
25. Boyue Li, Shicong Cen, Yuxin Chen and Yuejie Chi. Communication-Efficient Distributed Optimization in Networks with Gradient Tracking
26. Khaoula El Mekkaoui, Paul Blomstedt, Diego Mesquita and Samuel Kaski. Towards federated stochastic gradient Langevin dynamics
27. Felix Sattler, Klaus-Robert Müller and Wojciech Samek. Clustered Federated Learning
28. Ziteng Sun, Peter Kairouz, Ananda Theertha Suresh and Brendan McMahan. Backdoor Attacks on Federated Learning and Corresponding Defenses
29. Neta Shoham, Tomer Avidor, Aviv Keren, Nadav Israel, Daniel Benditkis, Liron Mor-Yosef and Itai Zeitak. Overcoming Forgetting in Federated Learning on Non-IID Data
30. Ahmed Khaled and Peter Richtárik. Gradient Descent with Compressed Iterates
31. Jiahuan Luo, Xueyang Wu, Yun Luo, Anbu Huang, Yunfeng Huang, Yang Liu and Qiang Yang. Real-World Image Datasets for Federated Learning
32. Ahmed Khaled, Konstantin Mishchenko and Peter Richtárik. First Analysis of Local GD on Heterogeneous Data
33. Dashan Gao, Ce Ju, Xiguang Wei, Yang Liu, Tianjian Chen and Qiang Yang. HHHFL: Hierarchical Heterogeneous Horizontal Federated Learning for Electroencephalography
Schedule:
Fri 8:55 a.m. - 9:00 a.m. | Opening remarks | Lixin Fan
Fri 9:00 a.m. - 9:30 a.m. | Federated Learning for Recommendation Systems (Invited talk) | Qiang Yang
Fri 9:30 a.m. - 10:00 a.m. | TBD (Invited talk) | Ameet Talwalkar
Fri 10:00 a.m. - 10:30 a.m. | Coffee break and posters
Fri 10:30 a.m. - 11:00 a.m. | TBD (Invited talk) | Max Welling
Fri 11:00 a.m. - 11:30 a.m. | TBD (Invited talk) | Dawn Song
Fri 11:30 a.m. - 11:40 a.m. | Think Locally, Act Globally: Federated Learning with Local and Global Representations (Contributed talk)
Fri 11:40 a.m. - 11:50 a.m. | FedMD: Heterogenous Federated Learning via Model Distillation (Contributed talk)
Fri 11:50 a.m. - 12:00 p.m. | Private Federated Learning with Domain Adaptation (Contributed talk)
Fri 12:00 p.m. - 12:10 p.m. | Improving Federated Learning Personalization via Model Agnostic Meta Learning (Contributed talk)
Fri 12:10 p.m. - 1:30 p.m. | Lunch break and posters | Felix Sattler · Khaoula El Mekkaoui · Neta Shoham · Cheng Hong · Florian Hartmann · Boyue Li · Daliang Li · Sebastian Caldas Rivera · Jianyu Wang · Kartikeya Bhardwaj · Tribhuvanesh Orekondy · Yan Kang · Dashan Gao · Mingshu Cong · Xin Yao · Songtao Lu · Jiahuan Luo · Shicong Cen · Peter Kairouz · Yihan Jiang · Tzu Ming Hsu · Aleksei Triastcyn · Yang Liu · Ahmed Khaled Ragab Bayoumi · Zhicong Liang · Boi Faltings · Seungwhan Moon · Suyi Li · Tao Fan · Tianchi Huang · Chunyan Miao · Hang Qi · Matthew Brown · Lucas Glass · Junpu Wang · Wei Chen · Radu Marculescu · Tomer Avidor · Xueyang Wu · Mingyi Hong · Ce Ju · John Rush · Ruixiao Zhang · Youchi Zhou · Françoise Beaufays · Yingxuan Zhu · Lei Xia
Fri 1:30 p.m. - 2:00 p.m. | TBD (Invited talk) | Daniel Ramage
Fri 2:20 p.m. - 2:50 p.m. | TBD (Invited talk) | Françoise Beaufays
Fri 2:30 p.m. - 2:40 p.m. | MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling (Contributed talk)
Fri 2:40 p.m. - 2:50 p.m. | Mitigating the Impact of Federated Learning on Client Resources (Contributed talk)
Fri 2:50 p.m. - 3:00 p.m. | A Communication Efficient Vertical Federated Learning Framework (Contributed talk)
Fri 3:00 p.m. - 3:10 p.m. | Better Communication Complexity for Local SGD (Contributed talk)
Fri 3:10 p.m. - 3:30 p.m. | Coffee break and posters
Fri 3:30 p.m. - 4:00 p.m. | TBD (Invited talk) | Raluca Ada Popa
Fri 4:30 p.m. - 5:00 p.m. | FOCUS: Federated Opportunistic Computing for Ubiquitous Systems (Invited talk) | Yiqiang Chen
Fri 5:00 p.m. - 5:10 p.m. | Federated Learning with Unbiased Gradient Aggregation and Controllable Meta Updating (Contributed talk)
Fri 5:10 p.m. - 5:20 p.m. | Exploring Private Federated Learning with Laplacian Smoothing (Contributed talk)
Fri 5:20 p.m. - 5:30 p.m. | Gradient-Leaks: Understanding Deanonymization in Federated Learning (Contributed talk)
Fri 5:30 p.m. - 5:40 p.m. | Federated Learning with Bayesian Differential Privacy (Contributed talk)
Fri 5:40 p.m. - 6:10 p.m. | Panel discussion (Panel)
Fri 6:10 p.m. - 6:15 p.m. | Closing remarks (Closing)
Author Information
Lixin Fan (WeBank AI Lab)
Dr. Lixin Fan is a Principal Scientist at WeBank, China. His research interests include machine learning and deep learning, computer vision and pattern recognition, image and video processing, 3D big data processing, data visualization and rendering, augmented and virtual reality, mobile ubiquitous and pervasive computing, and intelligent human-computer interfaces. Dr. Fan is the (co-)author of more than 60 international journal and conference publications, and has (co-)invented more than a hundred granted and pending patents filed in the US, Europe, and China. Before joining WeBank, he was affiliated with Nokia Technologies and Xerox Research Centre Europe (XRCE). His research work includes the well-recognized bag-of-keypoints method for image categorization.
Jakub Konečný (Google Research)
Yang Liu (WeBank)
Brendan McMahan (Google)
Virginia Smith (Carnegie Mellon University)
Han Yu (Nanyang Technological University (NTU))