Poster
Marginalized Hamiltonian Monte Carlo for Linear Mixed-Effects Models
Jinlin Lai · Daniel Sheldon · Justin Domke
East Exhibit Hall A-C #4001
Bayesian reasoning in linear mixed-effects models (LMMs) is challenging and often requires advanced sampling techniques like Markov chain Monte Carlo (MCMC). A common approach is to write the model in a probabilistic programming language and then sample via Hamiltonian Monte Carlo (HMC). However, there are many ways a user can transform a model that make HMC more or less efficient. In particular, marginalizing some variables can greatly improve HMC but is difficult for users to do manually. We develop an algorithm to easily marginalize random effects in LMMs. A naive approach introduces cubic time operations within HMC, but we reduce the running time to linear using fast linear algebra techniques. We show that marginalization is always beneficial when applicable and highlight improvements in various models, especially ones from cognitive sciences.
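To illustrate the kind of speedup the abstract refers to, here is a minimal sketch (not the paper's actual algorithm) for a single-factor LMM y = Xβ + Zu + ε with u ~ N(0, τ²I) and ε ~ N(0, σ²I). Marginalizing u gives y ~ N(Xβ, σ²Iₙ + τ²ZZᵀ); evaluating this density naively costs O(n³), but the Woodbury identity and matrix determinant lemma reduce the n×n factorization to a q×q one (q = number of groups), making the cost linear in n. All names and the model setup below are illustrative assumptions.

```python
import numpy as np

def logpdf_naive(y, mean, Z, tau2, sigma2):
    """O(n^3): build the full n x n marginal covariance and factorize it."""
    n = y.shape[0]
    Sigma = sigma2 * np.eye(n) + tau2 * (Z @ Z.T)  # marginal covariance
    L = np.linalg.cholesky(Sigma)
    r = np.linalg.solve(L, y - mean)
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * (n * np.log(2 * np.pi) + logdet + r @ r)

def logpdf_marginal_fast(y, mean, Z, tau2, sigma2):
    """Linear in n for fixed q: Woodbury identity for the quadratic form,
    matrix determinant lemma for the log-determinant."""
    n, q = Z.shape
    r = y - mean
    # q x q inner matrix: M = I_q + (tau2/sigma2) Z^T Z
    M = np.eye(q) + (tau2 / sigma2) * (Z.T @ Z)
    Lm = np.linalg.cholesky(M)
    # Woodbury: r^T Sigma^{-1} r = (r.r)/sigma2 - (tau2/sigma2^2) ||Lm^{-1} Z^T r||^2
    w = np.linalg.solve(Lm, Z.T @ r)
    quad = (r @ r) / sigma2 - (tau2 / sigma2**2) * (w @ w)
    # determinant lemma: log|Sigma| = n log(sigma2) + log|M|
    logdet = n * np.log(sigma2) + 2.0 * np.sum(np.log(np.diag(Lm)))
    return -0.5 * (n * np.log(2 * np.pi) + logdet + quad)
```

Both functions return the same log marginal density; inside an HMC sampler, only the fast version keeps each gradient evaluation cheap as n grows.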