Competition

The NeurIPS 2024 LLM Privacy Challenge

Qinbin Li · Junyuan Hong · Chulin Xie · Junyi Hou · Yiqun Diao · Zhun Wang · Dan Hendrycks · Zhangyang "Atlas" Wang · Bo Li · Bingsheng He · Dawn Song

Virtual Only
Sun 15 Dec 9 a.m. PST — noon PST

Abstract:

The NeurIPS 2024 LLM Privacy Challenge is designed to address the critical issue of privacy in the use of Large Language Models (LLMs), which have become fundamental in a wide array of artificial intelligence applications. This competition acknowledges the potential privacy risks posed by the extensive datasets used to train these models, including the inadvertent leakage of sensitive information. To mitigate these risks, the challenge is structured around two main tracks: the Red Team, focusing on identifying and exploiting privacy vulnerabilities, and the Blue Team, dedicated to developing defenses against such vulnerabilities. Participants will have the option to work with LLMs fine-tuned on synthetic private data or LLMs interacting with private system/user prompts, thus offering a versatile approach to tackling privacy concerns. The competition will provide participants with access to a toolkit designed to facilitate the development of privacy-enhancing methods, alongside baselines for comparison. Submissions will be evaluated based on attack accuracy, efficiency, and the effectiveness of defensive strategies, with prizes awarded to the most innovative and impactful contributions. By fostering a collaborative environment for exploring privacy-preserving techniques, the NeurIPS 2024 LLM Privacy Challenge aims to catalyze advancements in the secure and ethical deployment of LLMs, ensuring their continued utility in sensitive applications without compromising user privacy.
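To make the stated evaluation criterion concrete, below is a minimal sketch of how attack accuracy might be scored in the red-team setting: the attacker's extracted values are compared against the ground-truth private fields, and accuracy is the fraction recovered exactly. The data layout and function are illustrative assumptions for exposition, not the competition toolkit's actual API or scoring script.

# Illustrative sketch only: the field layout and helper below are assumptions,
# not the official LLM Privacy Challenge toolkit.

def attack_accuracy(guesses: dict[str, str], ground_truth: dict[str, str]) -> float:
    """Fraction of private fields the attacker recovered exactly (case-insensitive)."""
    if not ground_truth:
        return 0.0
    hits = sum(
        1
        for key, secret in ground_truth.items()
        if guesses.get(key, "").strip().lower() == secret.strip().lower()
    )
    return hits / len(ground_truth)


if __name__ == "__main__":
    # Hypothetical example: private fields hidden in a system prompt,
    # and the values an attacker extracted via crafted user prompts.
    truth = {"email": "alice@example.com", "phone": "555-0100"}
    extracted = {"email": "alice@example.com", "phone": "unknown"}
    print(f"attack accuracy: {attack_accuracy(extracted, truth):.2f}")  # 0.50

A blue-team defense would be scored on the complementary quantity, how much such an attack's accuracy drops while the model's utility on benign prompts is preserved.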

