

Poster

ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation

Jiazheng Xu · Xiao Liu · Yuchen Wu · Yuxuan Tong · Qinkai Li · Ming Ding · Jie Tang · Yuxiao Dong

Great Hall & Hall B1+B2 (level 1) #609
[ Project Page ] [ Paper ] [ Slides ] [ Poster ] [ OpenReview ]
Wed 13 Dec 8:45 a.m. PST — 10:45 a.m. PST

Abstract:

We present a comprehensive solution to learn and improve text-to-image models from human preference feedback. To begin with, we build ImageReward, the first general-purpose text-to-image human preference reward model, to effectively encode human preferences. Its training is based on our systematic annotation pipeline, including rating and ranking, which has collected 137k expert comparisons to date. In human evaluation, ImageReward outperforms existing scoring models and metrics, making it a promising automatic metric for evaluating text-to-image synthesis. On top of it, we propose Reward Feedback Learning (ReFL), a direct tuning algorithm that optimizes diffusion models against a scorer. Both automatic and human evaluation support ReFL's advantages over the compared methods. All code and datasets are provided at https://github.com/THUDM/ImageReward.
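The abstract does not spell out the training objective, but reward models learned from ranked human comparisons are commonly fit with a pairwise (Bradley-Terry style) ranking loss: the model is pushed to score the preferred image higher than the rejected one. The sketch below is a minimal, hypothetical PyTorch illustration of that idea; the `ToyRewardModel`, the feature dimension, and the hyperparameters are assumptions for demonstration only, not the paper's architecture (ImageReward itself is built on a pretrained vision-language backbone).

```python
# Hypothetical sketch of pairwise preference training for a reward model.
# The toy network and random features are stand-ins, not the paper's model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyRewardModel(nn.Module):
    """Maps a fused (prompt, image) feature vector to a scalar reward."""
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.GELU(), nn.Linear(256, 1)
        )

    def forward(self, fused_features: torch.Tensor) -> torch.Tensor:
        return self.head(fused_features).squeeze(-1)  # shape: (batch,)

def pairwise_ranking_loss(r_preferred: torch.Tensor,
                          r_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry style loss: the preferred image should score higher."""
    return -F.logsigmoid(r_preferred - r_rejected).mean()

# Toy training step; random tensors stand in for encoded (prompt, image) pairs.
model = ToyRewardModel()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

feats_preferred = torch.randn(8, 512)  # features of human-preferred images
feats_rejected = torch.randn(8, 512)   # features of less-preferred images

loss = pairwise_ranking_loss(model(feats_preferred), model(feats_rejected))
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"ranking loss: {loss.item():.4f}")
```

ReFL, as described in the abstract, then treats such a frozen scorer as a differentiable training signal for the diffusion model itself; see the paper and the linked repository for the actual algorithm and implementation.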
