

Poster in Workshop: Deep Reinforcement Learning

Strength Through Diversity: Robust Behavior Learning via Mixture Policies

Tim Seyde · Wilko Schwarting · Igor Gilitschenski · Markus Wulfmeier · Daniela Rus


Abstract:

Efficiency in robot learning is highly dependent on hyperparameters. Robot morphology and task structure differ widely, and finding the optimal settings typically requires sequential or parallel repetition of experiments, strongly increasing the interaction count. We propose a training method that relies on only a single trial by enabling agents to select and combine controller designs conditioned on the task. Our Hyperparameter Mixture Policies (HMPs) feature diverse sub-policies that vary in distribution types and parameterization, reducing the impact of design choices and unlocking synergies between low-level components. We demonstrate strong performance on the DeepMind Control Suite, Meta-World tasks, and a simulated ANYmal robot, showing that HMPs yield robust, data-efficient learning.
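To make the mixture idea concrete, below is a minimal sketch of a policy that gates over heterogeneous sub-policies with different distribution types, assuming a PyTorch setup. The specific heads (a squashed Gaussian and a per-dimension bang-bang head), network sizes, and class names are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: a gating network selects among sub-policies whose action
# distributions differ in type and parameterization. Illustrative only.
import torch
import torch.nn as nn
from torch.distributions import Categorical, Normal

class GaussianHead(nn.Module):
    """Continuous sub-policy with a tanh-squashed Gaussian."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.mu = nn.Linear(obs_dim, act_dim)
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def sample(self, obs):
        dist = Normal(self.mu(obs), self.log_std.exp())
        return torch.tanh(dist.rsample())            # squashed continuous action

class BangBangHead(nn.Module):
    """Discrete sub-policy choosing min/max per action dimension (assumed example)."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.logits = nn.Linear(obs_dim, act_dim * 2)
        self.act_dim = act_dim

    def sample(self, obs):
        logits = self.logits(obs).view(-1, self.act_dim, 2)
        idx = Categorical(logits=logits).sample()    # per-dimension {0, 1}
        return idx.float() * 2.0 - 1.0               # map to {-1, +1}

class MixturePolicy(nn.Module):
    """Task-conditioned gate over sub-policies with different distribution types."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.heads = nn.ModuleList([GaussianHead(obs_dim, act_dim),
                                    BangBangHead(obs_dim, act_dim)])
        self.gate = nn.Linear(obs_dim, len(self.heads))

    def act(self, obs):
        k = Categorical(logits=self.gate(obs)).sample()       # pick a sub-policy
        actions = torch.stack([h.sample(obs) for h in self.heads], dim=1)
        return actions[torch.arange(obs.shape[0]), k]

policy = MixturePolicy(obs_dim=24, act_dim=6)
obs = torch.randn(4, 24)
print(policy.act(obs).shape)  # torch.Size([4, 6])
```

The gate and the sub-policy heads would be trained jointly with the agent's RL objective, so the mixture can emphasize whichever controller design suits the task rather than committing to one upfront.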
