Vertical Federated Learning (VFL) is a distributed learning paradigm that allows multiple agents to jointly train a global model when each agent holds a different subset of features for the same samples. VFL is known to be vulnerable to backdoor attacks. However, unlike in standard horizontal federated learning, improving the robustness of VFL remains challenging. To this end, we propose RVFR, a novel robust VFL training and inference framework. The key to our approach is to ensure that, under a low-rank feature subspace, a small number of attacked samples, and other mild assumptions, RVFR recovers the underlying uncorrupted features with guarantees, thereby sanitizing the model against a wide range of backdoor attacks. Furthermore, RVFR also defends against inference-time adversarial feature attacks. Our empirical studies corroborate the robustness of the proposed framework.
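The abstract's central claim, that uncorrupted features can be recovered when they lie in a low-rank subspace and only a small fraction of samples are attacked, is closely related to low-rank plus sparse decomposition (robust PCA). The sketch below is a minimal, generic illustration of that idea in Python; it is not the paper's RVFR algorithm, and the function name recover_low_rank_features and the parameters lam and mu are assumptions introduced here for illustration only.

```python
import numpy as np

def recover_low_rank_features(F, lam=None, mu=None, n_iter=200, tol=1e-6):
    """Split a feature matrix F into a low-rank part L (clean features)
    and a sparse part S (corruptions confined to few samples) using a
    standard principal component pursuit (robust PCA) iteration.
    Illustration only; NOT the RVFR algorithm from the paper."""
    m, n = F.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))            # classic PCP sparsity weight
    if mu is None:
        mu = 0.25 * m * n / (np.abs(F).sum() + 1e-12)

    L = np.zeros_like(F)
    S = np.zeros_like(F)
    Y = np.zeros_like(F)                          # scaled dual variable

    def shrink(X, tau):                           # entrywise soft-thresholding
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def svt(X, tau):                              # singular-value thresholding
        U, sig, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(shrink(sig, tau)) @ Vt

    for _ in range(n_iter):
        L = svt(F - S + Y / mu, 1.0 / mu)         # update low-rank component
        S = shrink(F - L + Y / mu, lam / mu)      # update sparse component
        residual = F - L - S
        Y = Y + mu * residual                     # dual ascent step
        if np.linalg.norm(residual) <= tol * np.linalg.norm(F):
            break
    return L, S
```

Under these assumptions, stacking the agents' feature embeddings column-wise into F and running the sketch returns L as the low-rank (clean) feature estimate and S as the sparse corruption concentrated on the attacked samples.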
Author Information
Jing Liu (UIUC)
Chulin Xie (University of Illinois at Urbana-Champaign)
Krishnaram Kenthapadi (Amazon)
Sanmi Koyejo (University of Illinois at Urbana-Champaign & Google Research)

Sanmi Koyejo is an Assistant Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign and a research scientist at Google AI in Accra. Koyejo's research interests are in developing the principles and practice of adaptive and robust machine learning. Additionally, Koyejo focuses on applications to biomedical imaging and neuroscience. Koyejo co-founded the Black in AI organization and currently serves on its board.
Bo Li (UIUC)
More from the Same Authors
-
2021 : Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models »
Boxin Wang · Chejian Xu · Shuohang Wang · Zhe Gan · Yu Cheng · Jianfeng Gao · Ahmed Awadallah · Bo Li -
2021 : Defuse: Training More Robust Models through Creation and Correction of Novel Model Errors »
Dylan Slack · Krishnaram Kenthapadi -
2021 : Probabilistic Performance Metric Elicitation »
Zachary Robertson · Hantao Zhang · Sanmi Koyejo -
2021 : Certified Robustness for Free in Differentially Private Federated Learning »
Chulin Xie · Yunhui Long · Pin-Yu Chen · Krishnaram Kenthapadi · Bo Li -
2021 : Robust and Personalized Federated Learning with Spurious Features: an Adversarial Approach »
Xiaoyang Wang · Han Zhao · Klara Nahrstedt · Sanmi Koyejo -
2021 : Secure Byzantine-Robust Distributed Learning via Clustering »
Raj Kiriti Velicheti · Sanmi Koyejo -
2021 : Exploiting Causal Chains for Domain Generalization »
Olawale Salaudeen · Sanmi Koyejo -
2021 : Distribution Preserving Bayesian Coresets using Set Constraints »
Shovik Guha · Rajiv Khanna · Sanmi Koyejo -
2021 : What Would Jiminy Cricket Do? Towards Agents That Behave Morally »
Dan Hendrycks · Mantas Mazeika · Andy Zou · Sahil Patel · Christine Zhu · Jesus Navarro · Dawn Song · Bo Li · Jacob Steinhardt -
2021 : [S6] Defuse: Training More Robust Models through Creation and Correction of Novel Model Errors »
Dylan Slack · Krishnaram Kenthapadi -
2021 : Career and Life: Panel Discussion - Bo Li, Adriana Romero-Soriano, Devi Parikh, and Emily Denton »
Emily Denton · Devi Parikh · Bo Li · Adriana Romero -
2021 : Live Q&A with Bo Li »
Bo Li -
2021 : Invited talk – Trustworthy Machine Learning via Logic Inference, Bo Li »
Bo Li -
2021 Poster: G-PATE: Scalable Differentially Private Data Generator via Private Aggregation of Teacher Discriminators »
Yunhui Long · Boxin Wang · Zhuolin Yang · Bhavya Kailkhura · Aston Zhang · Carl Gunter · Bo Li -
2021 Poster: Anti-Backdoor Learning: Training Clean Models on Poisoned Data »
Yige Li · Xixiang Lyu · Nodens Koren · Lingjuan Lyu · Bo Li · Xingjun Ma -
2021 Poster: Adversarial Attack Generation Empowered by Min-Max Optimization »
Jingkang Wang · Tianyun Zhang · Sijia Liu · Pin-Yu Chen · Jiacen Xu · Makan Fardad · Bo Li -
2021 : Reconnaissance Blind Chess + Q&A »
Ryan Gardner · Gino Perrotta · Corey Lowman · Casey Richardson · Andrew Newman · Jared Markowitz · Nathan Drenkow · Bart Paulhamus · Ashley J Llorens · Todd Neller · Raman Arora · Bo Li · Mykel J Kochenderfer -
2021 Poster: TRS: Transferability Reduced Ensemble via Promoting Gradient Diversity and Model Smoothness »
Zhuolin Yang · Linyi Li · Xiaojun Xu · Shiliang Zuo · Qian Chen · Pan Zhou · Benjamin Rubinstein · Ce Zhang · Bo Li -
2020 Workshop: Workshop on Dataset Curation and Security »
Nathalie Baracaldo Angel · Yonatan Bisk · Avrim Blum · Michael Curry · John Dickerson · Micah Goldblum · Tom Goldstein · Bo Li · Avi Schwarzschild -
2020 Poster: Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations »
Huan Zhang · Hongge Chen · Chaowei Xiao · Bo Li · Mingyan Liu · Duane Boning · Cho-Jui Hsieh -
2020 Spotlight: Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations »
Huan Zhang · Hongge Chen · Chaowei Xiao · Bo Li · Mingyan Liu · Duane Boning · Cho-Jui Hsieh -
2020 Poster: On Convergence of Nearest Neighbor Classifiers over Feature Transformations »
Luka Rimanic · Cedric Renggli · Bo Li · Ce Zhang -
2020 Expo Talk Panel: Fairness, Explainability, and Privacy in AI/ML Systems »
Vidya Ravipati · Erika Pelaez Coyotl · Ujjwal Ratan · Krishnaram Kenthapadi -
2019 Tutorial: Representation Learning and Fairness »
Moustapha Cisse · Sanmi Koyejo