
Workshop: Generalization in Planning (GenPlan '23)

Contrastive Representations Make Planning Easy

Benjamin Eysenbach · Vivek Myers · Sergey Levine · Russ Salakhutdinov

Keywords: [ time series ] [ planning ] [ prediction ] [ contrastive learning ] [ inference ]


Probabilistic inference over time series data is challenging when observations are high-dimensional. In this paper, we show how inference questions relating to prediction and planning can have compact, closed-form solutions in terms of learned representations. The key idea is to apply a variant of contrastive learning to time series data. Prior work has already shown that the representations learned by contrastive learning encode a probability ratio. By first extending this analysis to show that the marginal distribution over representations is Gaussian, we can then prove that the conditional distribution of future representations is also Gaussian. Taken together, these results show that a variant of temporal contrastive learning yields representations distributed according to a Gaussian Markov chain, a graphical model in which inference (e.g., filtering, smoothing) has closed-form solutions. For example, in one special case the problem of trajectory inference simply corresponds to linear interpolation between the initial and final state representations. We provide brief empirical results validating our theory.
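The special case mentioned above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the encoder `psi` below is a hypothetical stand-in (a fixed random linear map) for a network trained with temporal contrastive learning, and `plan_waypoints` simply interpolates linearly between the initial and goal representations.

```python
import numpy as np

# Hypothetical encoder psi: maps a high-dimensional observation to a
# low-dimensional representation. A fixed random linear map stands in
# for a contrastively trained network here.
rng = np.random.default_rng(0)
obs_dim, rep_dim = 64, 8
W = rng.normal(size=(rep_dim, obs_dim)) / np.sqrt(obs_dim)

def psi(obs):
    return W @ obs

def plan_waypoints(obs_start, obs_goal, num_waypoints=5):
    """Trajectory inference in the special case described above:
    linearly interpolate between the representations of the initial
    and final states to obtain intermediate waypoint representations."""
    z0, zg = psi(obs_start), psi(obs_goal)
    alphas = np.linspace(0.0, 1.0, num_waypoints)
    return np.stack([(1 - a) * z0 + a * zg for a in alphas])

waypoints = plan_waypoints(rng.normal(size=obs_dim),
                           rng.normal(size=obs_dim))
print(waypoints.shape)  # (5, 8)
```

The interpolated points live in representation space; turning them back into states or actions would require an additional decoder or goal-conditioned policy, which this sketch omits.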
