Diffusion-based image generation models, such as Stable Diffusion or DALL·E 2, can learn from given images and generate high-quality samples under the guidance of prompts. For instance, they can be used to create artistic images that mimic an artist's style based on his or her original artworks, or to maliciously edit original images into fake content. However, such capability also raises serious ethical issues when used without proper authorization from the owner of the original images. In response, several attempts have been made to protect original images from such unauthorized data usage by adding imperceptible perturbations, which are designed to mislead the diffusion model and prevent it from properly generating new samples. In this work, we introduce a perturbation purification platform, named IMPRESS, to evaluate the effectiveness of imperceptible perturbations as a protective measure. IMPRESS is based on the key observation that imperceptible perturbations can lead to a perceptible inconsistency between the original image and its diffusion-reconstructed counterpart, which can be used to devise a new optimization strategy for purifying the image, thereby weakening the protection of the original image against unauthorized data usage (e.g., style mimicking, malicious editing). The proposed IMPRESS platform offers a comprehensive evaluation of several contemporary protection methods and can serve as an evaluation platform for future protection methods.
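The key observation above can be illustrated with a toy sketch. The code below is not the paper's implementation: `reconstruct` is a simple symmetric smoothing filter standing in for the latent-diffusion encode-decode pass, the 1-D signal stands in for an image, and `purify`, `lam`, `lr`, and `steps` are illustrative assumptions. It minimizes the same two-part objective the abstract describes: a reconstruction-consistency term plus a fidelity term that keeps the purified signal close to the protected input.

```python
import numpy as np

def reconstruct(x):
    # Stand-in for the diffusion model's encode-decode pass: a symmetric
    # 3-tap smoothing filter. (IMPRESS reconstructs images through a latent
    # diffusion model; this linear operator only mimics the idea that
    # reconstruction smooths away the adversarial perturbation.)
    return np.convolve(x, [0.25, 0.5, 0.25], mode="same")

def purify(x_protected, lam=0.1, lr=0.5, steps=200):
    # Minimize  ||R(x) - x||^2 + lam * ||x - x_protected||^2  by gradient
    # descent: the first term drives reconstruction consistency, the
    # second keeps the purified signal close to the protected input.
    x = x_protected.astype(float).copy()
    for _ in range(steps):
        r = reconstruct(x)
        # R is linear and symmetric here, so
        # d/dx ||R(x) - x||^2 = 2 (R - I)(R(x) - x)
        grad_consistency = 2.0 * (reconstruct(r - x) - (r - x))
        grad_fidelity = 2.0 * lam * (x - x_protected)
        x -= lr * (grad_consistency + grad_fidelity)
    return x

def inconsistency(x):
    # Perceptible gap between a signal and its reconstruction.
    return np.linalg.norm(reconstruct(x) - x)

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 64))
protected = clean + 0.3 * rng.standard_normal(64)  # simulated protective perturbation
purified = purify(protected)

# Purification shrinks the reconstruction inconsistency that the
# protective perturbation introduced.
print(inconsistency(protected), ">", inconsistency(purified))
```

Because the objective is initialized at the protected input, any descent step can only trade fidelity for lower inconsistency, so the purified signal always reconstructs more consistently than the protected one; this is the property that weakens perturbation-based protection.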
Author Information
Bochuan Cao (The Pennsylvania State University)
Changjiang Li (Stony Brook University)
Ting Wang (Stony Brook University)
Jinyuan Jia (The Pennsylvania State University)
Bo Li (UChicago/UIUC)
Jinghui Chen (The Pennsylvania State University)
More from the Same Authors
- 2021 : Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models »
  Boxin Wang · Chejian Xu · Shuohang Wang · Zhe Gan · Yu Cheng · Jianfeng Gao · Ahmed Awadallah · Bo Li
- 2021 : Certified Robustness for Free in Differentially Private Federated Learning »
  Chulin Xie · Yunhui Long · Pin-Yu Chen · Krishnaram Kenthapadi · Bo Li
- 2021 : RVFR: Robust Vertical Federated Learning via Feature Subspace Recovery »
  Jing Liu · Chulin Xie · Krishnaram Kenthapadi · Sanmi Koyejo · Bo Li
- 2021 : What Would Jiminy Cricket Do? Towards Agents That Behave Morally »
  Dan Hendrycks · Mantas Mazeika · Andy Zou · Sahil Patel · Christine Zhu · Jesus Navarro · Dawn Song · Bo Li · Jacob Steinhardt
- 2022 Poster: VF-PS: How to Select Important Participants in Vertical Federated Learning, Efficiently and Securely? »
  Jiawei Jiang · Lukas Burkhalter · Fangcheng Fu · Bolin Ding · Bo Du · Anwar Hithnawi · Bo Li · Ce Zhang
- 2022 : On the Vulnerability of Backdoor Defenses for Federated Learning »
  Pei Fang · Jinghui Chen
- 2022 : Accelerating Adaptive Federated Optimization with Local Gossip Communications »
  Yujia Wang · Pei Fang · Jinghui Chen
- 2022 : Improving Vertical Federated Learning by Efficient Communication with ADMM »
  Chulin Xie · Pin-Yu Chen · Ce Zhang · Bo Li
- 2022 : Benchmarking Robustness under Distribution Shift of Multimodal Image-Text Models »
  Jielin Qiu · Yi Zhu · Xingjian Shi · Zhiqiang Tang · DING ZHAO · Bo Li · Mu Li
- 2022 : DensePure: Understanding Diffusion Models towards Adversarial Robustness »
  Zhongzhu Chen · Kun Jin · Jiongxiao Wang · Weili Nie · Mingyan Liu · Anima Anandkumar · Bo Li · Dawn Song
- 2022 : Fifteen-minute Competition Overview Video »
  Nathan Drenkow · Raman Arora · Gino Perrotta · Todd Neller · Ryan Gardner · Mykel J Kochenderfer · Jared Markowitz · Corey Lowman · Casey Richardson · Bo Li · Bart Paulhamus · Ashley J Llorens · Andrew Newman
- 2022 : Spectrum Guided Topology Augmentation for Graph Contrastive Learning »
  Lu Lin · Jinghui Chen · Hongning Wang
- 2022 : How Powerful is Implicit Denoising in Graph Neural Networks »
  Songtao Liu · Rex Ying · Hanze Dong · Lu Lin · Jinghui Chen · Dinghao Wu
- 2022 : On the Robustness of Safe Reinforcement Learning under Observational Perturbations »
  ZUXIN LIU · Zijian Guo · Zhepeng Cen · Huan Zhang · Jie Tan · Bo Li · DING ZHAO
- 2023 : FOCUS: Fairness via Agent-Awareness for Federated Learning on Heterogeneous Data »
  Wenda Chu · Chulin Xie · Boxin Wang · Linyi Li · Lang Yin · Arash Nourian · Han Zhao · Bo Li
- 2023 : Identifying and Mitigating Vulnerabilities in LLM-Integrated Applications »
  Fengqing Jiang · Zhangchen Xu · Luyao Niu · Boxin Wang · Jinyuan Jia · Bo Li · Radha Poovendran
- 2023 Competition: TDC 2023 (LLM Edition): The Trojan Detection Challenge »
  Mantas Mazeika · Andy Zou · Norman Mu · Long Phan · Zifan Wang · Chunru Yu · Adam Khoja · Fengqing Jiang · Aidan O'Gara · Zhen Xiang · Arezoo Rajabi · Dan Hendrycks · Radha Poovendran · Bo Li · David Forsyth
- 2023 Workshop: Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023 (FL@FM-NeurIPS'23) »
  Jinghui Chen · Lixin Fan · Gauri Joshi · Sai Praneeth Karimireddy · Stacy Patterson · Shiqiang Wang · Han Yu
- 2023 : Decoding Backdoors in LLMs and Their Implications »
  Bo Li
- 2023 : BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models »
  Zhen Xiang · Fengqing Jiang · Zidi Xiong · Bhaskar Ramasubramanian · Radha Poovendran · Bo Li
- 2023 Poster: VLATTACK: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models »
  Ziyi Yin · Muchao Ye · Tianrong Zhang · Tianyu Du · Jinguo Zhu · Han Liu · Jinghui Chen · Ting Wang · Fenglong Ma
- 2023 Poster: FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning »
  Jinyuan Jia · Zhuowen Yuan · Dinuka Sahabandu · Luyao Niu · Arezoo Rajabi · Bhaskar Ramasubramanian · Bo Li · Radha Poovendran
- 2023 Poster: A3FL: Adversarially Adaptive Backdoor Attacks to Federated Learning »
  Hangfan Zhang · Jinyuan Jia · Jinghui Chen · Lu Lin · Dinghao Wu
- 2023 Poster: DiffAttack: Evasion Attacks Against Diffusion-Based Adversarial Purification »
  Mintong Kang · Dawn Song · Bo Li
- 2023 Poster: Domain Watermark: Effective and Harmless Dataset Copyright Protection is Closed at Hand »
  Junfeng Guo · Yiming Li · Lixu Wang · Shu-Tao Xia · Heng Huang · Cong Liu · Bo Li
- 2023 Poster: CBD: A Certified Backdoor Detector Based on Local Dominant Probability »
  Zhen Xiang · Zidi Xiong · Bo Li
- 2023 Poster: Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks »
  Zhaohan Xi · Tianyu Du · Changjiang Li · Ren Pang · Shouling Ji · Jinghui Chen · Fenglong Ma · Ting Wang
- 2023 Poster: Incentives in Federated Learning: Equilibria, Dynamics, and Mechanisms for Welfare Maximization »
  Aniket Murhekar · Zhuowen Yuan · Bhaskar Ray Chaudhury · Bo Li · Ruta Mehta
- 2023 Poster: UniT: A Unified Look at Certified Robust Training against Text Adversarial Perturbation »
  Muchao Ye · Ziyi Yin · Tianrong Zhang · Tianyu Du · Jinghui Chen · Ting Wang · Fenglong Ma
- 2023 Poster: WordScape: a Pipeline to extract multilingual, visually rich Documents with Layout Annotations from Web Crawl Data »
  Maurice Weber · Carlo Siebenschuh · Rory Butler · Anton Alexandrov · Valdemar Thanner · Georgios Tsolakis · Haris Jabbar · Ian Foster · Bo Li · Rick Stevens · Ce Zhang
- 2023 Poster: DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models »
  Boxin Wang · Weixin Chen · Hengzhi Pei · Chulin Xie · Mintong Kang · Chenhui Zhang · Chejian Xu · Zidi Xiong · Ritik Dutta · Rylan Schaeffer · Sang Truong · Simran Arora · Mantas Mazeika · Dan Hendrycks · Zinan Lin · Yu Cheng · Sanmi Koyejo · Dawn Song · Bo Li
- 2023 Oral: DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models »
  Boxin Wang · Weixin Chen · Hengzhi Pei · Chulin Xie · Mintong Kang · Chenhui Zhang · Chejian Xu · Zidi Xiong · Ritik Dutta · Rylan Schaeffer · Sang Truong · Simran Arora · Mantas Mazeika · Dan Hendrycks · Zinan Lin · Yu Cheng · Sanmi Koyejo · Dawn Song · Bo Li
- 2022 : Contributed Talk: DensePure: Understanding Diffusion Models towards Adversarial Robustness »
  Zhongzhu Chen · Kun Jin · Jiongxiao Wang · Weili Nie · Mingyan Liu · Anima Anandkumar · Bo Li · Dawn Song
- 2022 Workshop: Trustworthy and Socially Responsible Machine Learning »
  Huan Zhang · Linyi Li · Chaowei Xiao · J. Zico Kolter · Anima Anandkumar · Bo Li
- 2022 Spotlight: Fairness in Federated Learning via Core-Stability »
  Bhaskar Ray Chaudhury · Linyi Li · Mintong Kang · Bo Li · Ruta Mehta
- 2022 Competition: The Trojan Detection Challenge »
  Mantas Mazeika · Dan Hendrycks · Huichen Li · Xiaojun Xu · Andy Zou · Sidney Hough · Arezoo Rajabi · Dawn Song · Radha Poovendran · Bo Li · David Forsyth
- 2022 Spotlight: LOT: Layer-wise Orthogonal Training on Improving l2 Certified Robustness »
  Xiaojun Xu · Linyi Li · Bo Li
- 2022 Spotlight: Lightning Talks 5B-1 »
  Devansh Arpit · Xiaojun Xu · Zifan Shi · Ivan Skorokhodov · Shayan Shekarforoush · Zhan Tong · Yiqun Wang · Shichong Peng · Linyi Li · Ivan Skorokhodov · Huan Wang · Yibing Song · David Lindell · Yinghao Xu · Seyed Alireza Moazenipourasil · Sergey Tulyakov · Peter Wonka · Yiqun Wang · Ke Li · David Fleet · Yujun Shen · Yingbo Zhou · Bo Li · Jue Wang · Peter Wonka · Marcus Brubaker · Caiming Xiong · Limin Wang · Deli Zhao · Qifeng Chen · Dit-Yan Yeung
- 2022 Competition: Reconnaissance Blind Chess: An Unsolved Challenge for Multi-Agent Decision Making Under Uncertainty »
  Ryan Gardner · Gino Perrotta · Corey Lowman · Casey Richardson · Andrew Newman · Jared Markowitz · Nathan Drenkow · Bart Paulhamus · Ashley J Llorens · Todd Neller · Raman Arora · Bo Li · Mykel J Kochenderfer
- 2022 Spotlight: Certifying Some Distributional Fairness with Subpopulation Decomposition »
  Mintong Kang · Linyi Li · Maurice Weber · Yang Liu · Ce Zhang · Bo Li
- 2022 Spotlight: Lightning Talks 1A-4 »
  Siwei Wang · Jing Liu · Nianqiao Ju · Shiqian Li · Eloïse Berthier · Muhammad Faaiz Taufiq · Arsene Fansi Tchango · Chen Liang · Chulin Xie · Jordan Awan · Jean-Francois Ton · Ziad Kobeissi · Wenguan Wang · Xinwang Liu · Kewen Wu · Rishab Goel · Jiaxu Miao · Suyuan Liu · Julien Martel · Ruobin Gong · Francis Bach · Chi Zhang · Rob Cornish · Sanmi Koyejo · Zhi Wen · Yee Whye Teh · Yi Yang · Jiaqi Jin · Bo Li · Yixin Zhu · Vinayak Rao · Wenxuan Tu · Gaetan Marceau Caron · Arnaud Doucet · Xinzhong Zhu · Joumana Ghosn · En Zhu
- 2022 Spotlight: Lightning Talks 1A-3 »
  Kimia Noorbakhsh · Ronan Perry · Qi Lyu · Jiawei Jiang · Christian Toth · Olivier Jeunen · Xin Liu · Yuan Cheng · Lei Li · Manuel Rodriguez · Julius von Kügelgen · Lars Lorch · Nicolas Donati · Lukas Burkhalter · Xiao Fu · Zhongdao Wang · Songtao Feng · Ciarán Gilligan-Lee · Rishabh Mehrotra · Fangcheng Fu · Jing Yang · Bernhard Schölkopf · Ya-Li Li · Christian Knoll · Maks Ovsjanikov · Andreas Krause · Shengjin Wang · Hong Zhang · Mounia Lalmas · Bolin Ding · Bo Du · Yingbin Liang · Franz Pernkopf · Robert Peharz · Anwar Hithnawi · Julius von Kügelgen · Bo Li · Ce Zhang
- 2022 Spotlight: VF-PS: How to Select Important Participants in Vertical Federated Learning, Efficiently and Securely? »
  Jiawei Jiang · Lukas Burkhalter · Fangcheng Fu · Bolin Ding · Bo Du · Anwar Hithnawi · Bo Li · Ce Zhang
- 2022 Spotlight: CoPur: Certifiably Robust Collaborative Inference via Feature Purification »
  Jing Liu · Chulin Xie · Sanmi Koyejo · Bo Li
- 2022 : Panel »
  Pin-Yu Chen · Alex Gittens · Bo Li · Celia Cintas · Hilde Kuehne · Payel Das
- 2022 : Trustworthy Machine Learning in Autonomous Driving »
  Bo Li
- 2022 Workshop: Decentralization and Trustworthy Machine Learning in Web3: Methodologies, Platforms, and Applications »
  Jian Lou · Zhiguang Wang · Chejian Xu · Bo Li · Dawn Song
- 2022 : Invited Talk #5, Privacy-Preserving Data Synthesis for General Purposes, Bo Li »
  Bo Li
- 2022 : Fairness Panel »
  Freedom Gumedze · Rachel Cummings · Bo Li · Robert Tillman · Edward Choi
- 2022 : Trustworthy Federated Learning »
  Bo Li
- 2022 Poster: Improving Certified Robustness via Statistical Learning with Logical Reasoning »
  Zhuolin Yang · Zhikuan Zhao · Boxin Wang · Jiawei Zhang · Linyi Li · Hengzhi Pei · Bojan Karlaš · Ji Liu · Heng Guo · Ce Zhang · Bo Li
- 2022 Poster: Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection »
  Yiming Li · Yang Bai · Yong Jiang · Yong Yang · Shu-Tao Xia · Bo Li
- 2022 Poster: Fairness in Federated Learning via Core-Stability »
  Bhaskar Ray Chaudhury · Linyi Li · Mintong Kang · Bo Li · Ruta Mehta
- 2022 Poster: Generalizing Goal-Conditioned Reinforcement Learning with Variational Causal Reasoning »
  Wenhao Ding · Haohong Lin · Bo Li · DING ZHAO
- 2022 Poster: Certifying Some Distributional Fairness with Subpopulation Decomposition »
  Mintong Kang · Linyi Li · Maurice Weber · Yang Liu · Ce Zhang · Bo Li
- 2022 Poster: LOT: Layer-wise Orthogonal Training on Improving l2 Certified Robustness »
  Xiaojun Xu · Linyi Li · Bo Li
- 2022 Poster: CoPur: Certifiably Robust Collaborative Inference via Feature Purification »
  Jing Liu · Chulin Xie · Sanmi Koyejo · Bo Li
- 2022 Poster: One-shot Neural Backdoor Erasing via Adversarial Weight Masking »
  Shuwen Chai · Jinghui Chen
- 2022 Poster: Exploring the Limits of Domain-Adaptive Training for Detoxifying Large-Scale Language Models »
  Boxin Wang · Wei Ping · Chaowei Xiao · Peng Xu · Mostofa Patwary · Mohammad Shoeybi · Bo Li · Anima Anandkumar · Bryan Catanzaro
- 2022 Poster: SafeBench: A Benchmarking Platform for Safety Evaluation of Autonomous Vehicles »
  Chejian Xu · Wenhao Ding · Weijie Lyu · ZUXIN LIU · Shuai Wang · Yihan He · Hanjiang Hu · DING ZHAO · Bo Li
- 2022 Poster: General Cutting Planes for Bound-Propagation-Based Neural Network Verification »
  Huan Zhang · Shiqi Wang · Kaidi Xu · Linyi Li · Bo Li · Suman Jana · Cho-Jui Hsieh · J. Zico Kolter
- 2021 : Career and Life: Panel Discussion - Bo Li, Adriana Romero-Soriano, Devi Parikh, and Emily Denton »
  Remi Denton · Devi Parikh · Bo Li · Adriana Romero
- 2021 : Live Q&A with Bo Li »
  Bo Li
- 2021 : Invited talk – Trustworthy Machine Learning via Logic Inference, Bo Li »
  Bo Li
- 2021 : Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models »
  Boxin Wang · Chejian Xu · Shuohang Wang · Zhe Gan · Yu Cheng · Jianfeng Gao · Ahmed Awadallah · Bo Li
- 2021 Poster: G-PATE: Scalable Differentially Private Data Generator via Private Aggregation of Teacher Discriminators »
  Yunhui Long · Boxin Wang · Zhuolin Yang · Bhavya Kailkhura · Aston Zhang · Carl Gunter · Bo Li
- 2021 Poster: Anti-Backdoor Learning: Training Clean Models on Poisoned Data »
  Yige Li · Xixiang Lyu · Nodens Koren · Lingjuan Lyu · Bo Li · Xingjun Ma
- 2021 Poster: Adversarial Attack Generation Empowered by Min-Max Optimization »
  Jingkang Wang · Tianyun Zhang · Sijia Liu · Pin-Yu Chen · Jiacen Xu · Makan Fardad · Bo Li
- 2021 : Reconnaissance Blind Chess + Q&A »
  Ryan Gardner · Gino Perrotta · Corey Lowman · Casey Richardson · Andrew Newman · Jared Markowitz · Nathan Drenkow · Bart Paulhamus · Ashley J Llorens · Todd Neller · Raman Arora · Bo Li · Mykel J Kochenderfer
- 2021 Poster: TRS: Transferability Reduced Ensemble via Promoting Gradient Diversity and Model Smoothness »
  Zhuolin Yang · Linyi Li · Xiaojun Xu · Shiliang Zuo · Qian Chen · Pan Zhou · Benjamin Rubinstein · Ce Zhang · Bo Li
- 2020 Workshop: Workshop on Dataset Curation and Security »
  Nathalie Baracaldo · Yonatan Bisk · Avrim Blum · Michael Curry · John Dickerson · Micah Goldblum · Tom Goldstein · Bo Li · Avi Schwarzschild
- 2020 Poster: Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations »
  Huan Zhang · Hongge Chen · Chaowei Xiao · Bo Li · Mingyan Liu · Duane Boning · Cho-Jui Hsieh
- 2020 Spotlight: Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations »
  Huan Zhang · Hongge Chen · Chaowei Xiao · Bo Li · Mingyan Liu · Duane Boning · Cho-Jui Hsieh
- 2020 Poster: On Convergence of Nearest Neighbor Classifiers over Feature Transformations »
  Luka Rimanic · Cedric Renggli · Bo Li · Ce Zhang