

Backprop KF: Learning Discriminative Deterministic State Estimators

Tuomas Haarnoja · Anurag Ajay · Sergey Levine · Pieter Abbeel

Area 5+6+7+8 #119

Keywords: [ (Other) Robotics and Control ] [ (Other) Machine Learning Topics ] [ (Other) Probabilistic Models and Methods ] [ Deep Learning or Neural Networks ]


Generative state estimators based on probabilistic filters and smoothers are one of the most popular classes of state estimators for robots and autonomous vehicles. However, generative models have limited capacity to handle rich sensory observations, such as camera images, since they must model the entire distribution over sensor readings. Discriminative models do not suffer from this limitation, but are typically more complex to train as latent variable models for state estimation. We present an alternative approach where the parameters of the latent state distribution are directly optimized as a deterministic computation graph, resulting in a simple and effective gradient descent algorithm for training discriminative state estimators. We show that this procedure can be used to train state estimators that use complex input, such as raw camera images, which must be processed using expressive nonlinear function approximators such as convolutional neural networks. Our model can be viewed as a type of recurrent neural network, and the connection to probabilistic filtering allows us to design a network architecture that is particularly well suited for state estimation. We evaluate our approach on a synthetic tracking task with raw image inputs and on the visual odometry task in the KITTI dataset. The results show significant improvement over both standard generative approaches and regular recurrent neural networks.
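The key idea above is that the Kalman filter's predict and update equations are themselves deterministic, differentiable operations, so they can be unrolled as a computation graph and trained end to end by backpropagation. A minimal sketch of one such filter step is below; the toy system matrices and observation values are hypothetical illustrations, not from the paper, and in the full method the observation `z` and its covariance `R` would be produced by a learned network (e.g. a CNN) from a raw image.

```python
import numpy as np

def kf_step(mu, Sigma, z, R, A, Q, C):
    """One deterministic Kalman filter step: predict with dynamics (A, Q),
    then update with observation z and observation covariance R.
    Every operation is differentiable, so the step can be unrolled
    over time and trained with gradient descent."""
    # Predict: propagate the state estimate through the linear dynamics.
    mu_pred = A @ mu
    Sigma_pred = A @ Sigma @ A.T + Q
    # Update: incorporate the observation via the Kalman gain.
    S = C @ Sigma_pred @ C.T + R                # innovation covariance
    K = Sigma_pred @ C.T @ np.linalg.inv(S)     # Kalman gain
    mu_new = mu_pred + K @ (z - C @ mu_pred)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_pred
    return mu_new, Sigma_new

# Toy 1-D constant-position system (hypothetical parameters, for illustration).
A = np.array([[1.0]])   # dynamics
Q = np.array([[0.01]])  # process noise
C = np.array([[1.0]])   # observation model
R = np.array([[0.1]])   # observation noise (learned in the discriminative setting)

mu, Sigma = np.array([0.0]), np.array([[1.0]])
for z in [1.0, 1.1, 0.9]:
    mu, Sigma = kf_step(mu, Sigma, np.array([z]), R, A, Q, C)
# After a few observations near 1.0, the posterior mean moves toward 1.0
# and the posterior variance shrinks well below its initial value of 1.0.
```

Rewriting the filter this way in an automatic-differentiation framework is what allows the observation encoder and noise models to be fit discriminatively instead of via a generative sensor model.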
