

Poster in Workshop: Workshop on Machine Learning Safety

Aligning Robot Representations with Humans

Andreea Bobu · Andi Peng · Pulkit Agrawal · Julie A Shah · Anca Dragan


Abstract:

As robots are increasingly deployed in real-world environments, a key question becomes how best to teach them to accomplish the tasks that humans want. In this work, we argue that current robot learning approaches suffer from representation misalignment: the robot's learned task representation does not capture the human's true representation. Because humans will be the ultimate evaluators of task performance in the world, it is crucial that we explicitly focus our efforts on aligning robot representations with humans, in addition to learning the downstream task. We advocate that current representation learning approaches in robotics can be studied under a single unifying formalism: the representation alignment problem. We mathematically operationalize this problem, define its key desiderata, and situate current robot learning methods within this formalism.
