

Talk in Workshop: Instruction Tuning and Instruction Following

Invited Talk 3 – Fei Xia

Fri 15 Dec 8:15 a.m. PST — 8:45 a.m. PST

Abstract:

Title: Towards Instruction Following Robots

Abstract: This talk focuses on the integration of instruction-following language models in the field of robotics, leveraging two concepts: Affordance and Language to Reward (L2R). Affordance, as proposed in existing literature, provides a framework for robots to understand and interact with their environment in a meaningful way. It is defined as the set of potential actions that an environment enables for an agent, thereby granting robots the ability to execute tasks in various contexts. This concept allows robots to generate plans that are grounded in their environments. Language to Reward (L2R), in turn, proposes a new way to use language models zero-shot in robotics. L2R uses reward functions as a flexible interface, bridging the gap between abstract language instructions and specific, actionable tasks for robots. Through this method, language models define reward parameters, which are then optimized to direct robot actions effectively. A real-time optimizer, such as MuJoCo MPC, enhances this process by enabling an interactive and dynamic experience: users can instantly see the outcomes of their instructions, providing immediate feedback that can be used to modify and improve the robot's behavior.
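The L2R pipeline described above (instruction → reward parameters → optimizer → robot behavior) can be sketched in a few lines. This is a toy illustration only: the keyword-matching `language_to_reward_params` function is a stand-in for a real LLM, the `reward` function and its parameters are invented for this example, and random search stands in for a real-time optimizer such as MuJoCo MPC.

```python
import random

def language_to_reward_params(instruction: str) -> dict:
    """Toy stand-in for an LLM that maps an instruction to reward parameters."""
    params = {"target_height": 0.0, "target_speed": 0.0}
    if "jump" in instruction:
        params["target_height"] = 0.5
    if "fast" in instruction:
        params["target_speed"] = 2.0
    return params

def reward(state: dict, params: dict) -> float:
    # Quadratic penalty on deviation from the parameterized targets.
    return -((state["height"] - params["target_height"]) ** 2
             + (state["speed"] - params["target_speed"]) ** 2)

def optimize(params: dict, n_samples: int = 2000, seed: int = 0) -> dict:
    """Random-search stand-in for a real-time optimizer like MuJoCo MPC."""
    rng = random.Random(seed)
    best_state, best_r = None, float("-inf")
    for _ in range(n_samples):
        state = {"height": rng.uniform(0.0, 1.0), "speed": rng.uniform(0.0, 3.0)}
        r = reward(state, params)
        if r > best_r:
            best_state, best_r = state, r
    return best_state

# The instruction is turned into reward parameters, which the
# optimizer then drives the (toy) robot state toward.
params = language_to_reward_params("jump fast")
best = optimize(params)
```

The key design point, as the abstract notes, is that the reward function acts as the interface: the language model never outputs low-level actions, only the parameters of an objective that a conventional optimizer can solve.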

Bio: Fei Xia is a senior research scientist at Google DeepMind, focusing on the field of robotics. His work involves building intelligent agents capable of interacting with complex, unstructured real-world environments, with applications in home robotics. Recently, his work has centered on foundation models for robotics: using large language models (LLMs) to learn general-purpose skills that can be applied to a variety of robotic tasks.
