

Poster
in
Workshop: Foundation Models for Decision Making

Fast Imitation via Behavior Foundation Models

Matteo Pirotta · Andrea Tirinzoni · Ahmed Touati · Alessandro Lazaric · Yann Ollivier

[ Project Page ]
 
presentation: Foundation Models for Decision Making
Fri 15 Dec 6:15 a.m. PST — 3:30 p.m. PST

Abstract:

Imitation learning (IL) aims at producing agents that can imitate any behavior given a few expert demonstrations. Yet existing approaches require many demonstrations and/or running (online or offline) reinforcement learning (RL) algorithms for each new imitation task. Here we show that recent RL foundation models based on successor measures can imitate any expert behavior almost instantly with just a few demonstrations and no need for RL or fine-tuning, while accommodating several IL principles (behavioral cloning, feature matching, reward-based, and goal-based reductions). In our experiments, imitation via RL foundation models matches, and often surpasses, the performance of SOTA offline IL algorithms, and produces imitation policies from new demonstrations within seconds instead of hours.
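The "almost instant" imitation described above can be illustrated with a minimal sketch. Assume (hypothetically) a pretrained behavior foundation model whose policies are indexed by a latent vector z, and a learned backward embedding B(s) over states, as in successor-measure-based models; imitating an expert then reduces to averaging B(s) over the demonstration states, with no RL loop. The linear map `B` and the helper names below are stand-ins, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, LATENT_DIM = 4, 8

# Stand-in for a pretrained backward embedding B(s); in a real
# behavior foundation model this would be a trained network.
B = rng.normal(size=(LATENT_DIM, STATE_DIM))

def backward_embedding(states: np.ndarray) -> np.ndarray:
    """Embed a batch of states (n, STATE_DIM) -> (LATENT_DIM, n)."""
    return B @ states.T

def infer_policy_latent(demo_states: np.ndarray) -> np.ndarray:
    """Imitate by averaging backward embeddings of expert states:
    one pass over the demonstrations, no RL, no fine-tuning."""
    z = backward_embedding(demo_states).mean(axis=1)
    return z / (np.linalg.norm(z) + 1e-8)  # normalize the latent

# A few expert demonstration states
demos = rng.normal(size=(10, STATE_DIM))
z = infer_policy_latent(demos)
# The pretrained policy pi(a | s, z) would then be conditioned on z.
print(z.shape)
```

This captures why inference takes seconds rather than hours: the expensive learning happened once, during pretraining, and each new imitation task is just an embedding average.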
