
Poster
FETA: Towards Specializing Foundational Models for Expert Task Applications
Amit Alfassy · Assaf Arbelle · Oshri Halimi · Sivan Harary · Roei Herzig · Eli Schwartz · Rameswar Panda · Michele Dolfi · Christoph Auer · Peter Staar · Kate Saenko · Rogerio Feris · Leonid Karlinsky

Tue Nov 29 09:00 AM -- 11:00 AM (PST) @ Hall J #1016
Foundational Models (FMs) have demonstrated unprecedented capabilities, including zero-shot learning, high-fidelity data synthesis, and out-of-domain generalization. However, the parameter capacity of FMs is still limited, leading to poor out-of-the-box performance on many expert tasks (e.g., retrieval of technical illustrations from car manuals given language queries), whose data is either unseen during FM pre-training or belongs to the long tail of the huge pre-training datasets. This underlines the necessity to explicitly evaluate and finetune FMs on such expert tasks, arguably the ones that appear most often in practical real-world applications. In this paper, we propose a first-of-its-kind FETA benchmark built around the task of teaching FMs to understand technical documentation, via learning to match their graphical illustrations to corresponding language descriptions. Our FETA benchmark focuses on text-to-image and image-to-text retrieval in public car manuals and sales catalogue brochures. FETA is equipped with a procedure for completely automatic annotation extraction (code to be released upon acceptance), allowing easy extension of FETA to more documentation types and application domains in the future. Our automatic annotation leads to an automated performance metric shown to be consistent with metrics computed on human-curated annotations (also released). We provide multiple baselines and analysis of popular FMs on FETA, leading to several interesting findings that we believe will be very valuable to the FM community, paving the way towards real-world application of FMs for many practical expert tasks currently 'overlooked' by standard benchmarks focusing on common objects.
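As background for the retrieval task the abstract describes, the sketch below shows how a standard recall@k metric is typically computed from a query-item similarity matrix (e.g., cosine similarities between text embeddings and image embeddings). This is a generic illustration, not FETA's released code; the function name and the convention that query i's ground-truth match is item i are assumptions for the example.

```python
import numpy as np

def recall_at_k(sim, k=5):
    """Fraction of queries whose ground-truth item ranks in the top-k.

    sim: (n_queries, n_items) similarity matrix. For illustration, query i's
    correct match is assumed to be item i (a common benchmark convention;
    FETA's exact evaluation protocol may differ).
    """
    # Rank items for each query by descending similarity.
    ranks = np.argsort(-sim, axis=1)
    # A hit occurs when the ground-truth index appears among the top-k ranks.
    hits = (ranks[:, :k] == np.arange(len(sim))[:, None]).any(axis=1)
    return hits.mean()

# Toy example: 3 text queries vs. 3 illustrations; the diagonal holds the
# similarity of each query to its correct match.
sim = np.array([[0.9, 0.1, 0.0],
                [0.2, 0.3, 0.8],
                [0.1, 0.7, 0.6]])
print(recall_at_k(sim, k=1))  # only query 0's match ranks first -> 1/3
```

The same matrix evaluated in both directions (rows as text queries, columns as images, and transposed) yields the text-to-image and image-to-text retrieval scores the abstract refers to.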

Author Information

Amit Alfassy (Technion, IBM Research)
Assaf Arbelle (International Business Machines)
Oshri Halimi (Technion)
Sivan Harary (IBM-Research)
Roei Herzig (Tel Aviv University)
Eli Schwartz (IBM Research AI)
Rameswar Panda (MIT-IBM Watson AI Lab)
Michele Dolfi (IBM Research Europe)
Christoph Auer (International Business Machines)
Peter Staar (IBM Research)
Kate Saenko (Boston University & MIT-IBM Watson AI Lab, IBM Research)
Rogerio Feris (MIT-IBM Watson AI Lab, IBM Research)
Leonid Karlinsky (Weizmann Institute of Science)