Poster
Q-LDA: Uncovering Latent Patterns in Text-based Sequential Decision Processes
Jianshu Chen · Chong Wang · Lin Xiao · Ji He · Lihong Li · Li Deng
Pacific Ballroom #192
Keywords: [ Model-Based RL ] [ Topic Models ]
In sequential decision making, it is often important for end users to understand the underlying patterns or causes that lead to particular decisions. However, typical deep reinforcement learning algorithms seldom provide such information due to their black-box nature. In this paper, we present a probabilistic model, Q-LDA, to uncover latent patterns in text-based sequential decision processes. The model can be understood as a variant of latent topic models that is tailored to maximize total rewards; we further draw an interesting connection between an approximate maximum-likelihood estimation of Q-LDA and the celebrated Q-learning algorithm. We demonstrate in the text-game domain that our proposed method not only provides a viable mechanism to uncover latent patterns in decision processes, but also obtains state-of-the-art rewards in these games.
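To make the stated connection between Q-LDA's likelihood maximization and Q-learning more concrete, the sketch below illustrates one plausible reading: a Q-function defined as a bilinear form over topic proportions of the state and action texts, trained with a standard Q-learning-style temporal-difference update. This is a minimal illustrative sketch, not the authors' implementation; the topic-inference step is stubbed out, and all names (`infer_topics`, `q_value`, `U`, the topic dimensions) are hypothetical.

```python
import numpy as np

# Hedged sketch: bilinear Q over topic proportions + Q-learning-style TD update.
# Assumptions (not from the poster text): topic dimensions, bilinear form, learning rate.

rng = np.random.default_rng(0)

K_S, K_A = 20, 10                           # hypothetical numbers of state / action topics
gamma, lr = 0.99, 0.01                      # discount factor and learning rate
U = 0.01 * rng.standard_normal((K_A, K_S))  # bilinear interaction matrix (learned)

def infer_topics(text: str, k: int) -> np.ndarray:
    """Placeholder for LDA-style topic inference; returns a topic-proportion vector."""
    v = rng.random(k)
    return v / v.sum()

def q_value(theta_a: np.ndarray, theta_s: np.ndarray) -> float:
    """Q-value as a bilinear form: theta_a^T U theta_s."""
    return float(theta_a @ U @ theta_s)

def td_update(theta_s, theta_a, reward, next_theta_s, next_action_thetas):
    """One Q-learning-style update of the interaction matrix U."""
    global U
    target = reward
    if next_action_thetas:  # non-terminal: bootstrap with the max over next actions
        target += gamma * max(q_value(t, next_theta_s) for t in next_action_thetas)
    td_error = target - q_value(theta_a, theta_s)
    # Gradient of the bilinear form w.r.t. U is the outer product theta_a theta_s^T.
    U += lr * td_error * np.outer(theta_a, theta_s)
    return td_error

# Example step: infer topic proportions for the current state/action texts and update.
theta_s = infer_topics("You are in a dark room...", K_S)
theta_a = infer_topics("open the door", K_A)
next_theta_s = infer_topics("The door creaks open...", K_S)
next_action_thetas = [infer_topics(a, K_A) for a in ("go north", "look around")]
td_update(theta_s, theta_a, reward=1.0,
          next_theta_s=next_theta_s, next_action_thetas=next_action_thetas)
```

In this reading, the topic proportions play the role of interpretable latent state/action representations, while the TD update mirrors the Q-learning connection mentioned in the abstract.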