

Poster

DetectEval: Benchmarking LLM-Generated Text Detection in Real-World Scenarios

Junchao Wu · Runzhe Zhan · Derek Wong · Shu Yang · Xinyi Yang · Yulin Yuan · Lidia Chao

West Ballroom A-D #5207
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Recent research has introduced the critical task of detecting text generated by large language models (LLMs). With zero-shot methods like DetectGPT, detection capabilities have reached impressive levels. However, the reliability of existing detectors in real-world applications remains underexplored. In this study, we present a new benchmark, DetectEval, to highlight that even state-of-the-art (SOTA) techniques still face challenges with this task. We curated datasets from domains more susceptible to abuse, using commonly used LLMs to create data that more closely aligns with practical needs and real-world applications. Unlike previous studies, we employed heuristic rules to generate adversarial LLM-generated text, simulating advanced prompt usage, human revisions such as word substitutions, and writing errors. Our construction of DetectEval and the challenges it poses reveal the inner workings and vulnerabilities of current SOTA detectors. More importantly, we analyzed the potential impact of writing styles, model types, attack methods, training-time and test-time text lengths, and attacked human-written texts on different types of detectors, providing valuable insights. We believe DetectEval can serve as an effective benchmark for assessing detectors in real-world scenarios, evolving alongside advancing attack methods and thus posing more formidable challenges.
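To give a rough sense of the kind of heuristic perturbations the abstract mentions (word substitutions and writing errors), the sketch below shows one minimal way such rules could be implemented. This is not the authors' implementation; the function names, substitution list, and rates are illustrative assumptions only.

```python
import random

# Hypothetical illustration of heuristic perturbations of the kind described
# in the abstract (word substitutions and writing errors); NOT the DetectEval
# implementation, just a minimal sketch.

SYNONYMS = {"important": "crucial", "use": "employ", "show": "demonstrate"}

def substitute_words(text: str, rate: float = 0.3) -> str:
    """Randomly replace words that have an entry in SYNONYMS."""
    out = []
    for w in text.split():
        key = w.lower().strip(".,")
        if key in SYNONYMS and random.random() < rate:
            out.append(SYNONYMS[key])
        else:
            out.append(w)
    return " ".join(out)

def inject_typos(text: str, rate: float = 0.05) -> str:
    """Swap adjacent characters inside some words to mimic writing errors."""
    words = text.split()
    for i, w in enumerate(words):
        if len(w) > 3 and random.random() < rate:
            j = random.randrange(1, len(w) - 2)
            words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]
    return " ".join(words)

if __name__ == "__main__":
    sample = "It is important to use clear examples to show the idea."
    print(inject_typos(substitute_words(sample)))
```

Applying such perturbations to LLM-generated text yields adversarial variants against which a detector's robustness can then be measured.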
