Spotlight Poster

PrivAuditor: Benchmarking Data Protection Vulnerabilities in LLM Adaptation Techniques

Derui Zhu · Dingfan Chen · Xiongfei Wu · Jiahui Geng · Zhuo Li · Jens Grossklags · Lei Ma

East Exhibit Hall A-C #4303
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Large Language Models (LLMs) are recognized for their potential to be an important building block toward achieving artificial general intelligence due to their unprecedented capability for solving diverse tasks. Despite this, LLMs often underperform in domain-specific tasks without training on relevant domain data, a phenomenon attributed to distribution shifts. This makes adapting pre-trained LLMs with domain-specific data crucial. However, this adaptation raises significant privacy concerns, especially when the data involved comes from sensitive domains. In this work, we extensively investigate the privacy vulnerabilities of adapted (fine-tuned) LLMs and benchmark privacy leakage across a wide range of data modalities, state-of-the-art privacy attack methods, adaptation techniques, and pre-trained model architectures. We systematically evaluate these settings and pinpoint the critical factors contributing to privacy leakage. With our organized codebase and insights, we aim to provide a standardized auditing tool for practitioners seeking to deploy customized LLM applications with faithful privacy assessments.
