Oral
in
Workshop: Safe and Robust Control of Uncertain Systems

Learning Contraction Policies from Offline Data

Navid Rezazadeh · Negar Mehr


Abstract:

We propose a data-driven framework for designing control policies from an offline data set. Contraction theory enables constructing a policy-learning framework in which the closed-loop system trajectories are inherently convergent towards a unique trajectory. At the technical level, identifying the contraction metric, which is the distance metric with respect to which a robot's trajectories exhibit contraction, is often non-trivial. We propose to jointly learn the control policy and its corresponding contraction metric from offline data. To achieve this, we first learn the robot's dynamical model from an offline data set consisting of the robot's state and input trajectories. Using this learned dynamical model, we propose a data augmentation algorithm for learning contraction policies. We evaluate the performance of our proposed framework on simulated robotic goal-reaching tasks and demonstrate that enforcing contraction results in faster convergence.
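To make the contraction condition mentioned in the abstract concrete, the sketch below checks it numerically for a hand-picked linear closed-loop system. All of the quantities here (the dynamics `A`, `B`, the gain `K` standing in for a learned policy, the identity contraction metric `M`, and the rate `lam`) are illustrative assumptions, not the paper's method: with metric M, closed-loop Jacobian A_cl, and rate λ, contraction requires M·A_cl + A_clᵀ·M + 2λM ≺ 0, which in turn makes any two trajectories converge toward each other.

```python
import numpy as np

# Hypothetical linear system; these matrices are illustrative, not from the paper.
A = np.array([[-1.0, 1.0], [-1.0, 0.5]])  # open-loop dynamics
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 2.0]])                # stand-in for a learned policy u = -K x
Acl = A - B @ K                           # closed-loop Jacobian

lam = 0.1                                 # assumed contraction rate
M = np.eye(2)                             # identity contraction metric (assumed)
# Contraction LMI: M Acl + Acl^T M + 2*lam*M must be negative definite.
lmi = M @ Acl + Acl.T @ M + 2 * lam * M
assert np.max(np.linalg.eigvalsh(lmi)) < 0, "contraction condition violated"

# Consequence: two trajectories from different initial states converge
# toward each other under the closed-loop dynamics (forward-Euler rollout).
def step(x, dt=0.01):
    return x + dt * (Acl @ x)

x1, x2 = np.array([2.0, -1.0]), np.array([-1.5, 0.5])
d0 = np.linalg.norm(x1 - x2)
for _ in range(2000):
    x1, x2 = step(x1), step(x2)
assert np.linalg.norm(x1 - x2) < d0       # distance shrinks under contraction
```

In the paper's setting, `Acl` would instead come from a learned dynamics model and a learned policy, and `M` would be a learned state-dependent metric; the check above is the fixed-metric, linear special case of that condition.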