

Poster

Quantifying the Bitter Lesson: How Safety Benchmarks Measure Capabilities Instead of Safety

Richard Ren · Steven Basart · Adam Khoja · Alexander Pan · Alice Gatti · Long Phan · Xuwang Yin · Mantas Mazeika · Gabriel Mukobi · Ryan Kim · Stephen Fitz · Dan Hendrycks

West Ballroom A-D #5207
Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Performance on popular ML benchmarks is highly correlated with model scale, suggesting that most benchmarks tend to measure a similar underlying factor of general model capabilities. However, substantial research effort remains devoted to designing new benchmarks, many of which claim to measure novel phenomena. In the spirit of the Bitter Lesson, we ask whether such effort is wasteful. To quantify this question, we leverage spectral analysis to measure an underlying capabilities component, the direction in benchmark-performance space which explains most variation in model performance. In an extensive analysis of existing safety benchmarks, we find that variance in model performance on many safety benchmarks is largely explained by the capabilities component. In response, we argue that safety research should prioritize metrics which are not highly correlated with scale. Our work provides a lens to analyze both novel safety benchmarks and novel safety methods, which we hope will enable future work to make differential progress on safety.
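To make the abstract's "spectral analysis" concrete, the sketch below extracts a top principal component from a model-by-benchmark score matrix via SVD and measures how much of each benchmark's variance it explains. This is a minimal illustration of the general idea, not the paper's actual pipeline: the synthetic data, variable names, and standardization choices here are assumptions.

```python
import numpy as np

# Hypothetical score matrix: rows are models, columns are benchmarks,
# entries are benchmark scores. Placeholder random data, not the paper's.
rng = np.random.default_rng(0)
scores = rng.random((20, 8))  # 20 models x 8 benchmarks

# Standardize each benchmark column so the analysis is not dominated
# by differences in score scale across benchmarks.
Z = (scores - scores.mean(axis=0)) / scores.std(axis=0)

# Spectral analysis via SVD: the top right-singular vector is the
# direction in benchmark-performance space explaining the most
# variation in model performance (a "capabilities component").
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
capabilities_component = Vt[0]                    # per-benchmark loadings
capabilities_score = Z @ capabilities_component   # per-model projection

# Fraction of each benchmark's variance explained by the capabilities
# component: squared correlation between the benchmark's scores and
# the per-model capabilities score.
for j in range(Z.shape[1]):
    r = np.corrcoef(Z[:, j], capabilities_score)[0, 1]
    print(f"benchmark {j}: capabilities component explains {r**2:.1%} of variance")
```

Under this reading, a safety benchmark whose variance is largely explained by the capabilities component is effectively re-measuring scale-driven capabilities rather than a distinct safety property.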
