Kernelized Stein discrepancy (KSD) is a score-based discrepancy widely employed in goodness-of-fit tests. It can be used even when the target distribution has an unknown normalising constant, such as in Bayesian analysis. We show theoretically and empirically that the power of the KSD test can be low when the target distribution has well-separated modes, due to insufficient data in regions where the score functions of the alternative and the target distributions differ the most. To improve the test power, we propose to perturb both the target and the alternative distributions before applying the KSD test. The perturbation uses a Markov transition kernel that leaves the target invariant but perturbs alternatives. We provide numerical evidence that the proposed approach can yield substantially higher power than the standard KSD test when the target and the alternative are mixture distributions that differ only in their mixing weights.
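The abstract does not specify the kernel, the estimator, or the perturbation kernel, so the following is only a minimal sketch of the quantities involved: a V-statistic estimate of the squared KSD under a Gaussian RBF kernel, plus a single random-walk Metropolis step as one simple example of a Markov kernel that leaves the target invariant. The function names (`ksd_vstat`, `metropolis_step`) and the kernel and proposal choices are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def ksd_vstat(samples, score_fn, bandwidth=1.0):
    """V-statistic estimate of the squared kernelized Stein discrepancy
    with a Gaussian RBF kernel k(x, y) = exp(-||x - y||^2 / (2 h^2)).

    samples  : (n, d) array drawn from the candidate distribution q
    score_fn : maps (n, d) -> (n, d), the score grad_x log p(x) of the
               (possibly unnormalised) target p
    """
    x = np.asarray(samples, dtype=float)
    n, d = x.shape
    h2 = bandwidth ** 2

    s = score_fn(x)                                    # target scores at the samples
    diff = x[:, None, :] - x[None, :, :]               # pairwise differences x_i - x_j
    sqdist = np.sum(diff ** 2, axis=-1)
    k = np.exp(-sqdist / (2.0 * h2))                   # RBF Gram matrix

    # Stein kernel u_p(x_i, x_j), assembled term by term
    term1 = (s @ s.T) * k                               # s(x)^T s(y) k(x, y)
    term2 = np.einsum("id,ijd->ij", s, diff) / h2 * k   # s(x)^T grad_y k(x, y)
    term3 = -np.einsum("jd,ijd->ij", s, diff) / h2 * k  # s(y)^T grad_x k(x, y)
    term4 = (d / h2 - sqdist / h2 ** 2) * k              # trace of grad_x grad_y k(x, y)
    return float(np.mean(term1 + term2 + term3 + term4))

def metropolis_step(samples, log_p, step=0.5, rng=None):
    """One random-walk Metropolis step targeting p: a simple instance of a
    Markov kernel that leaves p invariant while moving samples drawn from
    other distributions, in the spirit of the perturbation described above."""
    rng = np.random.default_rng() if rng is None else rng
    prop = samples + step * rng.standard_normal(samples.shape)
    accept = np.log(rng.uniform(size=len(samples))) < log_p(prop) - log_p(samples)
    return np.where(accept[:, None], prop, samples)
```

As a usage sketch, for a Gaussian target N(mu, I) one could pass `score_fn = lambda x: mu - x` and `log_p = lambda x: -0.5 * np.sum((x - mu) ** 2, axis=1)`, apply a few Metropolis steps to the samples, and compare the KSD statistics before and after perturbation; the actual test additionally requires a calibrated rejection threshold (e.g. via a bootstrap), which is omitted here.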
Author Information
Xing Liu (Imperial College London)
Andrew Duncan (Imperial College London)
Axel Gandy (Department of Mathematics, Imperial College London)
More from the Same Authors
- 2020 : Probabilistic Adjoint Sensitivity Analysis for Fast Calibration of Partial Differential Equation Models
  Jonathan Cockayne · Andrew Duncan
- 2020 : Bayesian polynomial chaos
  Pranay Seshadri · Andrew Duncan · Ashley Scillitoe
- 2022 Poster: Joint Entropy Search for Multi-Objective Bayesian Optimization
  Ben Tu · Axel Gandy · Nikolas Kantas · Behrang Shafei
- 2020 Poster: Bayesian Probabilistic Numerical Integration with Tree-Based Models
  Harrison Zhu · Xing Liu · Ruya Kang · Zhichao Shen · Seth Flaxman · Francois-Xavier Briol
- 2019 Poster: Minimum Stein Discrepancy Estimators
  Alessandro Barp · Francois-Xavier Briol · Andrew Duncan · Mark Girolami · Lester Mackey