

Poster

VLMimic: Vision Language Models are Visual Imitation Learner for Fine-grained Actions

Guangyan Chen · Meiling Wang · Te Cui · Yao Mu · Haoyang Lu · Tianxing Zhou · Zicai Peng · Mengxiao Hu · Haizhou Li · Li Yuan · Yi Yang · Yufeng Yue

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Visual imitation learning (VIL) provides an efficient and intuitive strategy for robotic systems to acquire novel skills. Recent Vision Language Models (VLMs) have demonstrated remarkable vision and language reasoning capabilities for VIL tasks. Despite this progress, current VIL methods naively employ VLMs to learn high-level plans from human videos and rely on pre-defined motion primitives to execute physical interactions, which remains a major bottleneck. In this work, we present VLMimic, a novel paradigm that harnesses VLMs to learn actions directly at a fine-grained level, given only a limited number of human videos. Specifically, VLMimic first grounds object-centric movements from human videos and learns skills using hierarchical constraint representations, enabling fine-grained skills to be derived from limited human videos. These skills are then refined and updated through an iterative comparison strategy, enabling efficient adaptation to unseen environments. Extensive experiments show that VLMimic, using only 5 human videos, yields improvements of over 27% and 21% on RLBench and real-world manipulation tasks, respectively, and surpasses baselines by more than 37% on long-horizon tasks. Code and videos are available on our anonymous homepage.
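Read as a pipeline, the abstract suggests three stages: grounding object-centric movements from human videos, learning skills as hierarchical constraint representations, and refining those skills through iterative comparison. The sketch below only illustrates that structure in Python; the class and function names (Constraint, Skill, ground_movements, learn_skill, refine_skill), the particular constraint levels, and the stubbed VLM and execution calls are assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified data structures; the paper's actual
# representations are not specified on this page.
@dataclass
class Constraint:
    level: str          # e.g. "task", "motion", or "contact" (assumed levels)
    description: str    # natural-language constraint produced by a VLM

@dataclass
class Skill:
    name: str
    constraints: list = field(default_factory=list)

def ground_movements(video_frames):
    """Stand-in for grounding object-centric movements from a human video."""
    # A real system would detect and track objects to extract trajectories.
    return [{"object": "cup", "trajectory": video_frames}]

def learn_skill(movements, vlm_query):
    """Stand-in for deriving a skill as hierarchical constraints via a VLM."""
    skill = Skill(name="pour")
    for level in ("task", "motion", "contact"):
        text = vlm_query(f"Describe the {level}-level constraint for {movements}")
        skill.constraints.append(Constraint(level, text))
    return skill

def refine_skill(skill, execute, vlm_query, iterations=3):
    """Stand-in for iterative-comparison refinement in an unseen environment."""
    for _ in range(iterations):
        outcome = execute(skill)
        feedback = vlm_query(
            f"Compare outcome {outcome!r} against constraints {skill.constraints}"
        )
        if feedback == "ok":          # stop once the comparison reports no mismatch
            break
        skill.constraints.append(Constraint("refined", feedback))
    return skill

if __name__ == "__main__":
    fake_vlm = lambda prompt: "ok"          # placeholder for a real VLM call
    fake_exec = lambda skill: "succeeded"   # placeholder for robot execution
    movements = ground_movements(video_frames=["frame0", "frame1"])
    skill = learn_skill(movements, fake_vlm)
    print(refine_skill(skill, fake_exec, fake_vlm))
```

The stubs keep the sketch self-contained; in practice each stage would call a real VLM and a robot or simulator rather than returning canned strings.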
