

Search All 2023 Events

29 Results

Workshop
Sparse but Strong: Crafting Adversarially Robust Graph Lottery Tickets
Subhajit Dutta Chowdhury · Zhiyu Ni · Qingyuan Peng · Souvik Kundu · Pierluigi Nuzzo
Poster
Wed 8:45 HQA-Attack: Toward High Quality Black-Box Hard-Label Adversarial Attack on Text
Han Liu · Zhi Xu · Xiaotong Zhang · Feng Zhang · Fenglong Ma · Hongyang Chen · Hong Yu · Xianchao Zhang
Poster
Tue 8:45 Blurred-Dilated Method for Adversarial Attacks
Yang Deng · Weibin Wu · Jianping Zhang · Zibin Zheng
Workshop
Backdoor Threats from Compromised Foundation Models to Federated Learning
Xi Li · Songhe Wang · Chen Wu · Hao Zhou · Jiaqi Wang
Workshop
PROSAC: Provably Safe Certification for Machine Learning Models under Adversarial Attacks
Ziquan Liu · Zhuo Zhi · Ilija Bogunovic · Carsten Gerner-Beuerle · Miguel Rodrigues
Workshop
Fri 13:00 Adversarial Robustness Unhardening via Backdoor Attacks in Federated Learning
Taejin Kim · Jiarui Li · Nikhil Madaan · Shubhranshu Singh · Carlee Joe-Wong
Workshop
Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game
Sam Toyer · Olivia Watkins · Ethan Mendes · Justin Svegliato · Luke Bailey · Tiffany Wang · Isaac Ong · Karim Elmaaroufi · Pieter Abbeel · Trevor Darrell · Alan Ritter · Stuart J Russell
Poster
Thu 8:45 VLATTACK: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models
Ziyi Yin · Muchao Ye · Tianrong Zhang · Tianyu Du · Jinguo Zhu · Han Liu · Jinghui Chen · Ting Wang · Fenglong Ma
Poster
Thu 8:45 Content-based Unrestricted Adversarial Attack
Zhaoyu Chen · Bo Li · Shuang Wu · Kaixun Jiang · Shouhong Ding · Wenqiang Zhang
Workshop
AutoDAN: Automatic and Interpretable Adversarial Attacks on Large Language Models
Sicheng Zhu · Ruiyi Zhang · Bang An · Gang Wu · Joe Barrow · Zichao Wang · Furong Huang · Ani Nenkova · Tong Sun
Workshop
Adversarial Attacks on Neuron Interpretation via Activation Maximization
Alex Fulleringer · Geraldin Nanfack · Jonathan Marty · Michael Eickenberg · Eugene Belilovsky
Workshop
Sat 8:30 Adversarial Attacks and Defenses in Large Language Models: Old and New Threats
Leo Schwinn · David Dobre · Stephan Günnemann · Gauthier Gidel