Human Evaluation of Text-to-Image Models on a Multi-Task Benchmark
Vitali Petsiuk · Alexander E. Siemenn · Saisamrit Surbehera · Qi Qi Chin · Keith Tyser · Gregory Hunter · Arvind Raghavan · Yann Hicke · Bryan Plummer · Ori Kerret · Tonio Buonassisi · Kate Saenko · Armando Solar-Lezama · Iddo Drori

Sat Dec 03 11:50 AM -- 12:00 PM (PST)
Event URL: https://openreview.net/forum?id=_kHnBptAgGq

We provide a new multi-task benchmark for evaluating text-to-image models and perform a human evaluation comparing two of the most common open-source (Stable Diffusion) and commercial (DALL-E 2) models. Twenty computer science AI graduate students evaluated the two models on three tasks, at three difficulty levels, across ten prompts each, providing 3,600 ratings. Text-to-image generation has seen rapid progress to the point that many recent models can create realistic, high-resolution images for a wide variety of prompts. However, current text-to-image methods, and the broader body of research in vision-language understanding, still struggle with intricate text prompts that contain many objects with multiple attributes and relationships. We introduce a new text-to-image benchmark that contains a suite of fifty tasks and applications that capture a model’s ability to handle different features of a text prompt. For example, one task asks a model to generate a varying number of the same object to measure its ability to count, while another provides a text prompt with several objects that each have a different attribute to assess its ability to match objects with their attributes. Rather than subjectively evaluating text-to-image results on a set of prompts, our new multi-task benchmark consists of challenge tasks at three difficulty levels (easy, medium, and hard) along with human ratings for each generated image.
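
As a quick sanity check on the numbers stated above, the sketch below (illustrative only; the variable names are our own, not from the paper) shows how the reported 3,600 ratings follow from 20 raters each scoring 2 models on 3 tasks, at 3 difficulty levels, across 10 prompts each.

```python
# Illustrative sketch of the evaluation design described in the abstract.
# Variable names are assumptions made for clarity, not the authors' code.

num_raters = 20    # computer science AI graduate students
num_models = 2     # Stable Diffusion and DALL-E 2
num_tasks = 3      # tasks covered in the human evaluation
num_levels = 3     # difficulty levels: easy, medium, hard
num_prompts = 10   # prompts per task and difficulty level

total_ratings = num_raters * num_models * num_tasks * num_levels * num_prompts
print(total_ratings)  # 3600
```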

Author Information

Vitali Petsiuk (Boston University)

I am a PhD candidate in Computer Science working in the Image and Video Computing group at Boston University, advised by Professor Kate Saenko. My research lies in the field of Explainable AI for Computer Vision and Natural Language Processing models. During my research internships at Adobe, I worked on developing novel methods for making CV and NLP models more interpretable, with applications in Document Understanding. Prior to joining Boston University, I received my M.S. and B.S. degrees in Computer Science and Applied Mathematics at Belarusian State University, where I did research on Graph Theory and Semantic Segmentation for 2D and 3D Lung Imaging.

Alexander E. Siemenn (Massachusetts Institute of Technology)
Saisamrit Surbehera (Columbia University)
Qi Qi Chin (Harvard University)
Keith Tyser (Boston University, MIT Lincoln Laboratory)
Gregory Hunter (Columbia University)
Arvind Raghavan
Yann Hicke (Cornell University)
Bryan Plummer (Boston University)
Ori Kerret
Tonio Buonassisi (Massachusetts Institute of Technology)
Kate Saenko (Boston University & MIT-IBM Watson AI Lab, IBM Research)
Armando Solar-Lezama (MIT)
Iddo Drori (Boston University, MIT, Columbia University)
