Poster
Multimodal Learning with Deep Boltzmann Machines
Nitish Srivastava · Russ Salakhutdinov

Tue Dec 07:00 PM -- 12:00 AM PST @ Harrah's Special Events Center 2nd Floor

We propose a Deep Boltzmann Machine for learning a generative model of multimodal data. We show how to use the model to extract a meaningful representation of multimodal data. We find that the learned representation is useful for classification and information retrieval tasks, and hence conforms to some notion of semantic similarity. The model defines a probability density over the space of multimodal inputs. By sampling from the conditional distributions over each data modality, it is possible to create the representation even when some data modalities are missing. Our experimental results on bi-modal data consisting of images and text show that the Multimodal DBM can learn a good generative model of the joint space of image and text inputs that is useful for information retrieval from both unimodal and multimodal queries. We further demonstrate that our model can significantly outperform SVMs and LDA on discriminative tasks. Finally, we compare our model to other deep learning methods, including autoencoders and deep belief networks, and show that it achieves significant gains.
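The abstract's key inference idea, filling in a missing modality by sampling from the model's conditional distributions, can be sketched in miniature. The toy below is an illustrative assumption, not the paper's model: it uses a single shared hidden layer (a bimodal RBM rather than a full DBM), tiny dimensions, and randomly initialized weights standing in for a trained model. The image units are clamped while the text units and hidden units are resampled by alternating Gibbs updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (illustrative only; the paper's model is far larger and deeper).
n_img, n_txt, n_hid = 6, 4, 5

# Random weights stand in for a trained model (hypothetical parameters).
W_img = rng.normal(scale=0.1, size=(n_img, n_hid))
W_txt = rng.normal(scale=0.1, size=(n_txt, n_hid))
b_txt = np.zeros(n_txt)
b_hid = np.zeros(n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def infer_missing_text(v_img, n_steps=50):
    """Sample the missing text modality conditioned on the observed image
    modality via alternating Gibbs updates over hidden and text units."""
    v_txt = rng.integers(0, 2, size=n_txt).astype(float)
    p_txt = sigmoid(b_txt)  # initial readout before any Gibbs step
    for _ in range(n_steps):
        # Hidden units receive bottom-up input from both modalities.
        p_h = sigmoid(v_img @ W_img + v_txt @ W_txt + b_hid)
        h = (rng.random(n_hid) < p_h).astype(float)
        # Resample only the missing (text) units; the image stays clamped.
        p_txt = sigmoid(h @ W_txt.T + b_txt)
        v_txt = (rng.random(n_txt) < p_txt).astype(float)
    return p_txt  # conditional probabilities for the inferred text units

v_img = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 0.0])
p_txt = infer_missing_text(v_img)
```

The returned probabilities can then serve as the joint representation for retrieval or classification, which is the role the missing-modality inference plays in the abstract.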

Author Information

Nitish Srivastava (Apple Inc)
Russ Salakhutdinov (Carnegie Mellon University)