

Poster

Sparse Coding for Learning Interpretable Spatio-Temporal Primitives

Taehwan Kim · Greg Shakhnarovich · Raquel Urtasun


Abstract:

Sparse coding has recently become a popular approach in computer vision for learning dictionaries of natural images. In this paper we extend sparse coding to learn interpretable spatio-temporal primitives of human motion. We cast the problem of learning spatio-temporal primitives as a tensor factorization problem and introduce constraints to learn interpretable primitives. In particular, we use group norms over these tensors, diagonal constraints on the activations, and smoothness constraints that are inherent to human motion.
We demonstrate the effectiveness of our approach in learning interpretable representations of human motion from motion capture data, and show that it outperforms recently developed matching pursuit and sparse coding algorithms.
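The abstract does not state the exact objective. The sketch below is only an illustrative reading of the kind of factorization it describes, reconstruction of motion data plus a group norm on the activations and a temporal smoothness penalty, written in a simplified matrix form X ≈ AB rather than the paper's tensor formulation; all dimensions, penalty weights, and update rules are assumptions, and the diagonal activation constraints are not modeled here.

```python
# Minimal illustrative sketch (not the authors' method): sparse coding of motion
# data with a group (L2,1) penalty on activations and a temporal smoothness
# penalty on the reconstruction. All sizes and weights are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)

T, D, K = 100, 30, 8                      # frames, pose dimensions, primitives
X = rng.standard_normal((T, D))           # stand-in for motion-capture data

B = rng.standard_normal((K, D)) * 0.1     # dictionary of motion primitives
A = rng.standard_normal((T, K)) * 0.1     # per-frame activations

lam_group = 0.1                           # weight of the group norm on activations
lam_smooth = 0.1                          # weight of the temporal smoothness term


def objective(X, A, B):
    """Reconstruction + group sparsity on activations + smoothness over time."""
    S = A @ B
    recon = 0.5 * np.sum((X - S) ** 2)
    # Group norm: L2 norm of each activation column, summed over primitives,
    # encouraging entire primitives to switch off.
    group = lam_group * np.sum(np.linalg.norm(A, axis=0))
    # Smoothness: penalize frame-to-frame differences of the reconstruction.
    smooth = lam_smooth * np.sum(np.diff(S, axis=0) ** 2)
    return recon + group + smooth


# Crude alternating (sub)gradient descent, just to show how the terms interact.
step = 1e-3
for _ in range(200):
    S = A @ B
    R = S - X                                             # reconstruction residual
    # Gradient of the smoothness term with respect to the reconstruction S.
    diff = S[1:] - S[:-1]
    grad_S = np.zeros_like(S)
    grad_S[1:] += 2.0 * diff
    grad_S[:-1] -= 2.0 * diff
    grad_A = (R + lam_smooth * grad_S) @ B.T
    grad_A += lam_group * A / (np.linalg.norm(A, axis=0, keepdims=True) + 1e-8)
    grad_B = A.T @ (R + lam_smooth * grad_S)
    A -= step * grad_A
    B -= step * grad_B

print("final objective:", objective(X, A, B))
```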
