Poster
Self-Adaptable Point Processes with Nonparametric Time Decays
Zhimeng Pan · Zheng Wang · Jeff M Phillips · Shandian Zhe
Many applications involve multi-type event data. Understanding the complex influences of the events on each other is critical for discovering useful knowledge and for predicting future events and their types. Existing methods either ignore or only partially account for these influences. Recent works use recurrent neural networks to model the event rate. While highly expressive, they couple all the temporal dependencies in a black box, making it hard to extract meaningful knowledge. More importantly, most methods assume an exponential time decay of the influence strength, which is over-simplified and can miss many important strength-varying patterns. To overcome these limitations, we propose SPRITE, a $\underline{S}$elf-adaptable $\underline{P}$oint p$\underline{R}$ocess w$\underline{I}$th nonparametric $\underline{T}$ime d$\underline{E}$cays, which can decouple the influences between every pair of events and capture various time decays of the influence strengths. Specifically, we use an embedding to represent each event type and model the event influence as an unknown function of the embeddings and the time span. We derive a general construction that can cover all possible time-decaying functions. By placing Gaussian process (GP) priors over the latent functions and using Gauss-Legendre quadrature to evaluate the integral in the construction, we can flexibly estimate all kinds of time-decaying influences, without restricting them to any specific form or imposing derivative constraints that bring learning difficulties. We then use the weight-space augmentation of GPs to develop an efficient stochastic variational learning algorithm. We show the advantages of our approach in both an ablation study and real-world applications.
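The core idea in the abstract can be illustrated with a minimal sketch (not the authors' code; all function names and the softplus/random-feature choices are assumptions for illustration): a monotone time decay is built as the exponentiated negative integral of a positive latent function, the latent function is a GP sample approximated in weight space via random Fourier features, and the integral is evaluated with Gauss-Legendre quadrature rather than a closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weight-space (random Fourier feature) approximation of an RBF-kernel
# GP sample g(s); num_feats and lengthscale are illustrative choices.
num_feats = 100
lengthscale = 1.0
omegas = rng.normal(0.0, 1.0 / lengthscale, num_feats)
phases = rng.uniform(0.0, 2.0 * np.pi, num_feats)
weights = rng.normal(0.0, 1.0, num_feats)

def latent_g(s):
    """Evaluate the random-feature GP sample at time points s."""
    feats = np.sqrt(2.0 / num_feats) * np.cos(np.outer(s, omegas) + phases)
    return feats @ weights

def decay(t, order=20):
    """Approximate h(t) = exp(-int_0^t softplus(g(s)) ds) with
    Gauss-Legendre quadrature. softplus keeps the integrand positive,
    so h decays monotonically from h(0) = 1 without imposing any
    parametric (e.g. exponential) form or derivative constraints."""
    nodes, w = np.polynomial.legendre.leggauss(order)
    s = 0.5 * t * (nodes + 1.0)          # map nodes from [-1, 1] to [0, t]
    integrand = np.log1p(np.exp(latent_g(s)))  # softplus(g(s)) > 0
    integral = 0.5 * t * np.sum(w * integrand)
    return np.exp(-integral)

print(decay(0.0))                # 1.0 (no decay at zero time span)
print(decay(1.0) > decay(2.0))   # True (monotone decay)
```

The positivity of the integrand is what guarantees a valid decay for any GP sample, which is the role the integral construction plays in the abstract.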
Author Information
Zhimeng Pan (University of Utah)
Zheng Wang (University of Utah)
Jeff M Phillips (University of Utah)
Shandian Zhe (University of Utah)
More from the Same Authors
- 2022 Spotlight: Recall Distortion in Neural Network Pruning and the Undecayed Pruning Algorithm
  Aidan Good · Jiaqi Lin · Xin Yu · Hannah Sieg · Mikey Fergurson · Shandian Zhe · Jerzy Wieczorek · Thiago Serra
- 2022 Poster: Recall Distortion in Neural Network Pruning and the Undecayed Pruning Algorithm
  Aidan Good · Jiaqi Lin · Xin Yu · Hannah Sieg · Mikey Fergurson · Shandian Zhe · Jerzy Wieczorek · Thiago Serra
- 2022 Poster: Infinite-Fidelity Coregionalization for Physical Simulation
  Shibo Li · Zheng Wang · Robert Kirby · Shandian Zhe
- 2022 Poster: Batch Multi-Fidelity Active Learning with Budget Constraints
  Shibo Li · Jeff M Phillips · Xin Yu · Robert Kirby · Shandian Zhe
- 2021: An Interactive Visual Demo of Bias Mitigation Techniques for Word Representations
  Archit Rathore · Sunipa Dev · Vivek Srikumar · Jeff M Phillips · Yan Zheng · Michael Yeh · Junpeng Wang · Wei Zhang · Bei Wang
- 2021 Poster: Characterizing possible failure modes in physics-informed neural networks
  Aditi Krishnapriyan · Amir Gholami · Shandian Zhe · Robert Kirby · Michael Mahoney
- 2021 Poster: Batch Multi-Fidelity Bayesian Optimization with Deep Auto-Regressive Networks
  Shibo Li · Robert Kirby · Shandian Zhe
- 2020 Poster: Multi-Fidelity Bayesian Optimization via Deep Neural Networks
  Shibo Li · Wei Xing · Robert Kirby · Shandian Zhe
- 2018 Poster: Stochastic Nonparametric Event-Tensor Decomposition
  Shandian Zhe · Yishuai Du
- 2018 Spotlight: Stochastic Nonparametric Event-Tensor Decomposition
  Shandian Zhe · Yishuai Du
- 2016 Poster: The Robustness of Estimator Composition
  Pingfan Tang · Jeff M Phillips