

Poster

The Unmet Promise of Synthetic Training Images: Using Retrieved Real Images Performs Better

Scott Geng · Cheng-Yu Hsieh · Vivek Ramanujan · Matthew Wallingford · Chun-Liang Li · Ranjay Krishna · Pang Wei Koh

East Exhibit Hall A-C #1602
Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Generative text-to-image models enable us to synthesize an unlimited number of images in a controllable manner, spurring many recent efforts to train vision models with synthetic data. However, every synthetic image ultimately originates from the upstream data used to train the generator. What value does the intermediate generator add over directly training on relevant parts of the upstream data? Grounding this question in the setting of task adaptation, we compare training on task-relevant, targeted synthetic data generated by Stable Diffusion (a generative model trained on the LAION-2B dataset) against training on targeted real images retrieved directly from LAION-2B. We show that while synthetic data can benefit some downstream tasks, it is universally matched or outperformed by real data from our simple retrieval baseline. Our analysis suggests that this underperformance is partially due to high-frequency generator artifacts and inaccurate task-relevant visual details in the synthetic images. Overall, we argue that retrieval is a critical baseline to consider when training with synthetic data, a baseline that current methods do not yet surpass.
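To make the comparison concrete, below is a minimal sketch (not the authors' released code) of the two ways of sourcing task-targeted training images described in the abstract: synthesizing them with Stable Diffusion versus retrieving real images from the generator's own training distribution via CLIP similarity. The class names, the model checkpoints, and the `laion_image_embeddings` placeholder are illustrative assumptions, not details from the paper.

```python
import torch
import open_clip
from diffusers import StableDiffusionPipeline

class_names = ["golden retriever", "tabby cat"]  # hypothetical downstream classes

# --- Option 1: synthesize targeted images with a text-to-image generator ---
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
synthetic_images = [pipe(f"a photo of a {name}").images[0] for name in class_names]

# --- Option 2: retrieve targeted real images from the upstream dataset ---
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")

# `laion_image_embeddings` stands in for a precomputed (N, D) matrix of CLIP
# image embeddings over the upstream corpus; a real pipeline would query a
# nearest-neighbor index instead of holding embeddings in memory.
laion_image_embeddings = torch.randn(10_000, 512)  # placeholder for illustration
laion_image_embeddings /= laion_image_embeddings.norm(dim=-1, keepdim=True)

with torch.no_grad():
    text = tokenizer([f"a photo of a {name}" for name in class_names])
    text_features = model.encode_text(text)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    # Cosine similarity between each class prompt and every candidate image,
    # keeping the top-100 real images per class as the retrieval baseline.
    sims = text_features @ laion_image_embeddings.T
    retrieved_ids = sims.topk(k=100, dim=-1).indices
```

Either pool of images (synthetic or retrieved) would then be used to fine-tune the downstream vision model; the paper's finding is that the retrieved real images match or beat the synthetic ones.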
