

Poster

ALOHA: from Attention to Likes – a unified mOdel for understanding HumAn responses to diverse visual content

Peizhao Li · Junfeng He · Gang Li · Rachit Bhargava · Shaolei Shen · Nachiappan Valliappan · Youwei Liang · Hongxiang Gu · Venky Ramachandran · Golnaz Farhadi · Yang Li · Kai Kohlhoff · Vidhya Navalpakkam


Abstract:

Progress in human behavior modeling involves understanding both implicit, early-stage perceptual behavior, such as human attention, and explicit, later-stage behavior, such as subjective ratings, likes, and preferences. Yet most prior research has modeled implicit and explicit human behavior in isolation, and is often limited to a specific type of visual content. Can we build a unified model of human attention and preference behavior that works reliably across diverse types of visual content? Such a model would enable predicting subjective feedback, such as overall satisfaction or aesthetic quality ratings, along with the underlying human attention or interaction heatmaps and viewing order, allowing designers and content-creation models to optimize their creations for human-centric improvements. In this paper, we propose ALOHA -- a unified model for understanding human responses from attention to likes, across diverse visual content. ALOHA leverages a multimodal transformer with distinct prediction heads for each facet, and predicts human responses such as attention heatmaps, scanpaths or viewing order, and subjective ratings/preferences. We train ALOHA on diverse public datasets spanning natural images, web pages, and graphic designs, and achieve state-of-the-art (SOTA) performance on multiple benchmarks across different image domains and behavior modeling tasks. Potential applications include providing instant feedback on the effectiveness of UIs, digital designs, and images, and serving as a reward model to further optimize design and visual-content creation.
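The abstract describes a shared multimodal transformer backbone with separate prediction heads for attention heatmaps, scanpath/viewing order, and subjective ratings. The following is a minimal PyTorch-style sketch of that kind of multi-head architecture; all module names, dimensions, and head designs below are illustrative assumptions and not the authors' implementation.

    # Illustrative sketch only: module names, sizes, and head structure are
    # assumptions inferred from the abstract, not the ALOHA codebase.
    import torch
    import torch.nn as nn

    class MultiFacetBehaviorModel(nn.Module):
        def __init__(self, d_model=512, n_heads=8, n_layers=6, max_scanpath_len=16):
            super().__init__()
            # Shared transformer encoder over multimodal (e.g., image-patch) tokens.
            layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            # Distinct prediction heads, one per behavioral facet.
            self.heatmap_head = nn.Linear(d_model, 1)                  # per-token attention density
            self.scanpath_head = nn.Linear(d_model, max_scanpath_len)  # per-token viewing-order logits
            self.rating_head = nn.Sequential(                          # pooled subjective rating/preference
                nn.Linear(d_model, d_model), nn.GELU(), nn.Linear(d_model, 1)
            )

        def forward(self, tokens):
            # tokens: (batch, num_tokens, d_model) multimodal token embeddings
            h = self.encoder(tokens)
            heatmap = self.heatmap_head(h).squeeze(-1)   # (batch, num_tokens)
            scanpath = self.scanpath_head(h)             # (batch, num_tokens, max_scanpath_len)
            rating = self.rating_head(h.mean(dim=1))     # (batch, 1)
            return heatmap, scanpath, rating

In such a design, the shared encoder lets the implicit (attention) and explicit (rating) signals regularize each other, while the per-facet heads keep each output space and loss separate.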
