Poster

MCP: Learning Composable Hierarchical Control with Multiplicative Compositional Policies

Xue Bin Peng · Michael Chang · Grace Zhang · Pieter Abbeel · Sergey Levine

East Exhibition Hall B + C #44

Keywords: [ Reinforcement Learning ] [ Reinforcement Learning and Planning -> Hierarchical RL ] [ Multitask and Transfer Learning ] [ Algorithms ]


Abstract:

Humans are able to perform a myriad of sophisticated tasks by drawing on skills acquired through prior experience. For autonomous agents to have this capability, they must be able to extract reusable skills from past experience that can be recombined in new ways for subsequent tasks. Furthermore, when controlling complex high-dimensional morphologies, such as humanoid bodies, tasks often require multiple skills to be coordinated simultaneously. Learning a discrete primitive for every combination of skills quickly becomes prohibitive; composable primitives that can be recombined into a large variety of behaviors are better suited to coping with this combinatorial explosion. In this work, we propose multiplicative compositional policies (MCP), a method for learning reusable motor skills that can be composed to produce a range of complex behaviors. Our method factorizes an agent's skills into a collection of primitives, where multiple primitives can be activated simultaneously via multiplicative composition. This flexibility allows the primitives to be transferred and recombined to elicit new behaviors as necessary for novel tasks. We demonstrate that MCP is able to extract composable skills for highly complex simulated characters from pre-training tasks, such as motion imitation, and then reuse these skills to solve challenging continuous control tasks, such as dribbling a soccer ball to a goal, and picking up an object and transporting it to a target location.
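The composition step the abstract describes has a compact form when the primitives are Gaussian: a weighted product of Gaussian action distributions is itself Gaussian, with a precision-weighted mean and variance. The sketch below is a minimal NumPy rendering of that product-of-Gaussians identity, not the paper's implementation; the function name, toy inputs, and diagonal-covariance assumption are illustrative choices made here.

```python
import numpy as np

def compose_gaussian_primitives(mus, sigmas, weights):
    """Multiplicatively compose Gaussian primitives (illustrative sketch).

    The composite prod_i N(mu_i, sigma_i^2)^{w_i} is Gaussian up to
    normalization; its parameters follow from the standard
    product-of-Gaussians identities, assuming diagonal covariances.

    mus:     (k, d) primitive means
    sigmas:  (k, d) primitive standard deviations
    weights: (k,)   non-negative gating weights w_i(s, g)
    """
    w = weights[:, None]                        # broadcast over action dims
    precisions = w / sigmas**2                  # w_i / sigma_i^2
    var = 1.0 / precisions.sum(axis=0)          # composite variance
    mu = var * (precisions * mus).sum(axis=0)   # precision-weighted mean
    return mu, np.sqrt(var)

# Toy example: two 1-D primitives pulling in opposite directions.
# The heavier weight on the first primitive pulls the composite toward +1.
mus = np.array([[1.0], [-1.0]])
sigmas = np.array([[0.5], [0.5]])
mu, sigma = compose_gaussian_primitives(mus, sigmas, np.array([0.8, 0.2]))
action = np.random.normal(mu, sigma)            # sample a composite action
```

Because the composite is renormalized, the gating weights need not sum to one; activating several primitives at once simply tightens the composite distribution where the primitives agree.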
