Continuous hidden Markov models (HMMs) assume that observations are generated from a mixture of Gaussian densities, limiting their ability to model more complex distributions. In this work, we address this shortcoming and propose novel continuous HMM models, dubbed FlowHMMs, that enable learning general continuous observation densities without constraining them to follow a Gaussian distribution or a mixture thereof. To that end, we leverage deep flow-based architectures that model complex, non-Gaussian functions and propose two variants of training a FlowHMM model. The first, based on a gradient-based technique, can be applied directly to continuous multidimensional data, yet its application to longer data sequences remains computationally expensive. Therefore, we also present a second approach to training our FlowHMM that relies on the co-occurrence matrix of discretized observations and considers the joint distribution of pairs of co-observed values, rendering the training time independent of the training sequence length. As a result, we obtain a model that can be flexibly adapted to the characteristics and dimensionality of the data. We perform a variety of experiments in which we compare both training strategies against a baseline of Gaussian mixture models. We show that, in terms of the quality of the recovered probability distribution, the accuracy of hidden-state prediction, and the likelihood of unseen data, our approach outperforms standard Gaussian-based methods.
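The gradient-based training variant can be illustrated with a short sketch: an HMM whose per-state emission densities are given by invertible flows, with the sequence log-likelihood computed by the forward algorithm in log space and maximized by gradient descent. The code below is a minimal illustration under assumed names (FlowHMM, n_states, etc.), not the authors' implementation; for brevity each state uses a single invertible linear map, where a real FlowHMM would stack nonlinear flow layers.

```python
# Minimal sketch of gradient-based FlowHMM training (assumed names, not the
# paper's code). Each hidden state emits x = W z + b with z ~ N(0, I); a
# deeper nonlinear flow would replace this linear map in practice.
import math
import torch


class FlowHMM(torch.nn.Module):
    def __init__(self, n_states: int, dim: int):
        super().__init__()
        self.log_pi = torch.nn.Parameter(torch.zeros(n_states))           # initial-state logits
        self.log_A = torch.nn.Parameter(torch.zeros(n_states, n_states))  # transition logits
        # One invertible linear flow (W x + b) per hidden state.
        self.W = torch.nn.Parameter(torch.eye(dim).repeat(n_states, 1, 1))
        self.b = torch.nn.Parameter(torch.zeros(n_states, dim))

    def emission_log_prob(self, x):
        # Change of variables: z = W^{-1}(x - b),
        # log p(x) = log N(z; 0, I) - log|det W|.
        diff = x.unsqueeze(0) - self.b.unsqueeze(1)                       # (n_states, T, dim)
        z = torch.linalg.solve(self.W, diff.transpose(-1, -2)).transpose(-1, -2)
        log_base = -0.5 * (z ** 2).sum(-1) - 0.5 * x.shape[-1] * math.log(2 * math.pi)
        log_det = torch.linalg.slogdet(self.W).logabsdet                  # (n_states,)
        return log_base - log_det.unsqueeze(-1)                           # (n_states, T)

    def log_likelihood(self, x):
        # Forward algorithm in log space over a sequence x of shape (T, dim).
        log_e = self.emission_log_prob(x)
        log_pi = torch.log_softmax(self.log_pi, dim=0)
        log_A = torch.log_softmax(self.log_A, dim=1)
        alpha = log_pi + log_e[:, 0]
        for t in range(1, x.shape[0]):
            alpha = torch.logsumexp(alpha.unsqueeze(1) + log_A, dim=0) + log_e[:, t]
        return torch.logsumexp(alpha, dim=0)


# Train by maximizing the sequence log-likelihood with gradient descent.
model = FlowHMM(n_states=3, dim=2)
x = torch.randn(500, 2)  # placeholder observation sequence
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    nll = -model.log_likelihood(x)
    nll.backward()
    opt.step()
```

Note that the per-timestep forward recursion above is what makes this variant expensive for long sequences; the paper's second variant instead fits the model to a co-occurrence matrix of discretized observation pairs, so its cost does not grow with the sequence length.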
Author Information
Pawel Lorek (University of Wrocław, Tooploox)
Rafal Nowak (University of Wrocław, Tooploox)
Tomasz Trzcinski (Warsaw University of Technology, Tooploox, IDEAS, Jagiellonian University)
Maciej Zieba (Tooploox, Wroclaw University of Science and Technology)
More from the Same Authors
- 2022: Diversity Balancing Generative Adversarial Networks for fast simulation of the Zero Degree Calorimeter in the ALICE experiment at CERN
  Jan Dubiński · Kamil Deja · Sandro Wenzel · Przemysław Rokita · Tomasz Trzcinski
- 2022 Poster: On Analyzing Generative and Denoising Capabilities of Diffusion-based Deep Generative Models
  Kamil Deja · Anna Kuzina · Tomasz Trzcinski · Jakub Tomczak
- 2021 Poster: Non-Gaussian Gaussian Processes for Few-Shot Regression
  Marcin Sendera · Jacek Tabor · Aleksandra Nowak · Andrzej Bedychaj · Massimiliano Patacchiola · Tomasz Trzcinski · Przemysław Spurek · Maciej Zieba
- 2020 Poster: UCSG-NET - Unsupervised Discovering of Constructive Solid Geometry Tree
  Kacper Kania · Maciej Zieba · Tomasz Kajdanowicz
- 2019: Coffee Break & Poster Session 1
  Yan Zhang · Jonathon Hare · Adam Prugel-Bennett · Po Leung · Patrick Flaherty · Pitchaya Wiratchotisatian · Alessandro Epasto · Silvio Lattanzi · Sergei Vassilvitskii · Morteza Zadimoghaddam · Theja Tulabandhula · Fabian Fuchs · Adam Kosiorek · Ingmar Posner · William Hang · Anna Goldie · Sujith Ravi · Azalia Mirhoseini · Yuwen Xiong · Mengye Ren · Renjie Liao · Raquel Urtasun · Haici Zhang · Michele Borassi · Shengda Luo · Andrew Trapp · Geoffroy Dubourg-Felonneau · Yasmeen Kussad · Christopher Bender · Manzil Zaheer · Junier Oliva · Michał Stypułkowski · Maciej Zieba · Austin Dill · Chun-Liang Li · Songwei Ge · Eunsu Kang · Oiwi Parker Jones · Kelvin Ka Wing Wong · Joshua Payne · Yang Li · Azade Nazi · Erkut Erdem · Aykut Erdem · Kevin O'Connor · Juan J Garcia · Maciej Zamorski · Jan Chorowski · Deeksha Sinha · Harry Clifford · John W Cassidy
- 2018 Poster: BinGAN: Learning Compact Binary Descriptors with a Regularized GAN
  Maciej Zieba · Piotr Semberecki · Tarek El-Gaaly · Tomasz Trzcinski