The Neural Hawkes Process: A Neurally Self-Modulating Multivariate Point Process
Many events occur in the world. Some event types are stochastically excited or inhibited—in the sense of having their probabilities elevated or decreased—by patterns in the sequence of previous events. Discovering such patterns can help us predict which type of event will happen next and when. We model streams of discrete events in continuous time by constructing a neurally self-modulating multivariate point process in which the intensities of multiple event types evolve according to a novel continuous-time LSTM. This generative model allows past events to influence the future in complex and realistic ways by conditioning future event intensities on the hidden state of a recurrent neural network that has consumed the stream of past events. Our model has desirable qualitative properties. It achieves competitive likelihood and predictive accuracy on real and synthetic datasets, including under missing-data conditions.
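To make the architecture concrete, here is a minimal NumPy sketch of the inter-event dynamics the abstract describes: after each event, the continuous-time LSTM's cell state decays exponentially toward a learned target, and each event type's intensity is a positive (scaled-softplus) readout of the resulting hidden state, so an intensity can rise (excitation) or fall toward zero (inhibition) as time passes. The function names and shapes here are illustrative assumptions, not the authors' implementation, and the gate updates that set c, c_bar, delta, and o at each event are omitted.

```python
import numpy as np

def scaled_softplus(x, s):
    # s * log(1 + exp(x / s)): a softened ReLU that keeps each intensity
    # strictly positive while letting it approach 0 (inhibition) or grow
    # roughly linearly (excitation). logaddexp(0, z) = log(1 + exp(z)).
    return s * np.logaddexp(0.0, x / s)

def intensities(t, t_prev, c, c_bar, delta, o, W, s):
    """Intensities lambda_k(t) of all K event types at a time t between events.

    Between events, the continuous-time LSTM's cell state decays exponentially
    from its post-event value c toward a target c_bar at elementwise rate
    delta; the hidden state and intensities are deterministic readouts of it.

    Hypothetical shapes: c, c_bar, delta, o are (D,); W is (K, D); s is (K,).
    """
    c_t = c_bar + (c - c_bar) * np.exp(-delta * (t - t_prev))  # decaying cell state c(t)
    h_t = o * np.tanh(c_t)                                     # hidden state h(t)
    return scaled_softplus(W @ h_t, s)                         # lambda(t), shape (K,)

# Toy usage with random parameters: D hidden units, K event types.
rng = np.random.default_rng(0)
D, K = 8, 3
lam = intensities(t=1.5, t_prev=1.0,
                  c=rng.normal(size=D), c_bar=rng.normal(size=D),
                  delta=rng.uniform(0.1, 2.0, size=D), o=rng.uniform(size=D),
                  W=rng.normal(size=(K, D)), s=np.ones(K))
print(lam)  # K nonnegative intensities
```

Because the softplus output is always positive, the model can let learned hidden-state dynamics push an intensity arbitrarily close to zero, expressing inhibition without the nonnegativity constraints that a classical Hawkes process places on its excitation kernels.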
Author Information
Hongyuan Mei (Johns Hopkins University)
I am a final-year Ph.D. student (2016-) in the Department of Computer Science at Johns Hopkins University, affiliated with the Center for Language and Speech Processing, where I am advised by Jason Eisner. My research focuses on designing models and algorithms to solve challenging real-life problems, with particular interest in continuous-time modeling and natural language processing.
Jason Eisner (Johns Hopkins University)
Jason Eisner is Professor of Computer Science at Johns Hopkins University, as well as Director of Research at Microsoft Semantic Machines. He is a Fellow of the Association for Computational Linguistics. At Johns Hopkins, he is also affiliated with the Center for Language and Speech Processing, the Machine Learning Group, the Cognitive Science Department, and the national Center of Excellence in Human Language Technology. His goal is to develop the probabilistic modeling, inference, and learning techniques needed for a unified model of all kinds of linguistic structure. His 135+ papers have presented various algorithms for parsing, machine translation, and weighted finite-state machines; formalizations, algorithms, theorems, and empirical results in computational phonology; and unsupervised or semi-supervised learning methods for syntax, morphology, and word-sense disambiguation. He is also the lead designer of Dyna, a new declarative programming language that provides an infrastructure for AI research. He has received two school-wide awards for excellence in teaching, as well as recent Best Paper Awards at ACL 2017 and EMNLP 2019.
More from the Same Authors
- 2020 Poster: Noise-Contrastive Estimation for Multivariate Point Processes
  Hongyuan Mei · Tom Wan · Jason Eisner
- 2019: Panel Discussion
  Jacob Andreas · Edward Gibson · Stefan Lee · Noga Zaslavsky · Jason Eisner · Jürgen Schmidhuber
- 2019: Invited Talk - 3
  Jason Eisner
- 2018: Panel Discussion
  Rich Caruana · Mike Schuster · Ralf Schlüter · Hynek Hermansky · Renato De Mori · Samy Bengio · Michiel Bacchiani · Jason Eisner
- 2018: Jason Eisner, "BiLSTM-FSTs and Neural FSTs"
  Jason Eisner
- 2014 Poster: Learning to Search in Branch and Bound Algorithms
  He He · Hal Daumé III · Jason Eisner
- 2012 Poster: Imitation Learning by Coaching
  He He · Hal Daumé III · Jason Eisner
- 2012 Poster: Learned Prioritization for Trading Off Accuracy and Speed
  Jiarong Jiang · Adam Teichert · Hal Daumé III · Jason Eisner