Search All 2024 Events

8 Results
Workshop
vTune: Verifiable Fine-Tuning Through Backdooring
Eva Zhang · Akilesh Potti · Micah Goldblum
Poster
Thu 16:30 · PureGen: Universal Data Purification for Train-Time Poison Defense via Generative Model Dynamics
Omead Pooladzandi · Sunay Bhat · Jeffrey Jiang · Alexander Branch · Gregory Pottie
Workshop
Poster: Leveraging Large Language Models for Zero-Shot Detection and Mitigation of Data Poisoning in Wearable AI Systems
Malithi Mithsara Wanniarachchi Kankanamge · Abdur Shahid · Ning Yang
Poster
Wed 11:00 · From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models
Zhuoshi Pan · Yuguang Yao · Gaowen Liu · Bingquan Shen · H. Vicky Zhao · Ramana Kompella · Sijia Liu
Poster
Wed 11:00 · Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models
Yuancheng Xu · Jiarui Yao · Manli Shu · Yanchao Sun · Zichu Wu · Ning Yu · Tom Goldstein · Furong Huang
Workshop
Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks
Lukas Gosch · Mahalakshmi Sabanayagam · Debarghya Ghoshdastidar · Stephan Günnemann
Workshop
PoisonedParrot: Subtle Data Poisoning Attacks to Elicit Copyright-Infringing Content from Large Language Models
Michael-Andrei Panaitescu-Liess · Pankayaraj Pathmanathan · Yigitcan Kaya · Zora Che · Bang An · Sicheng Zhu · Aakriti Agrawal · Furong Huang
Workshop
Mitigating Downstream Model Risks via Model Provenance
Keyu Wang · Scott Schaffter · Abdullah Norozi Iranzad · Doina Precup · Jonathan Lebensold · Megan Risdal