

Poster

Analytically Computing Partial Information Decomposition

Chaitanya Goswami · Amanda Merkley


Abstract:

Bivariate partial information decomposition (PID) has emerged as a promising tool for analyzing interactions in complex systems, particularly in neuroscience. PID achieves this by decomposing the information that two sources (e.g., different brain regions) have about a target (e.g., a stimulus) into unique, redundant, and synergistic terms. However, computing PID remains a challenging problem, often requiring optimization over a space of distributions. While several methods have been proposed to compute PID terms numerically, there is a surprising dearth of work on computing them analytically. The only known analytical PID result is for jointly Gaussian distributions. In this work, we present two theoretical advances that enable analytical calculation of the PID terms for numerous well-known distributions, including distributions relevant to neuroscience, such as Poisson, Cauchy, and binomial. Our first result generalizes the analytical Gaussian PID result to the much larger class of stable distributions. We also establish a theoretical link between PID and the emerging fields of data thinning and data fission. Our second result uses this link to derive analytical PID terms for two more classes of distributions: convolution-closed distributions and a sub-class of the exponential family. Furthermore, we provide an analytical upper bound for approximately calculating PID for convolution-closed distributions, whose tightness we demonstrate in simulation.
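For readers new to PID, the decomposition referenced in the abstract is usually stated as a set of consistency equations relating the unique (UI), redundant (RI), and synergistic (SI) terms to ordinary mutual informations. The notation below is ours, not the paper's: T is the target and X_1, X_2 are the sources. Note that these are three equations in four unknowns, which is why an additional definition (typically an optimization over distributions) is needed to pin the terms down.

```latex
\begin{align*}
I(T; X_1, X_2) &= UI_1 + UI_2 + RI + SI, \\
I(T; X_1)      &= UI_1 + RI, \\
I(T; X_2)      &= UI_2 + RI.
\end{align*}
```

The known Gaussian result mentioned in the abstract can be made concrete under one common convention: for jointly Gaussian (X_1, X_2, T) with a scalar target, several PID definitions coincide with the minimal-mutual-information (MMI) decomposition, where redundancy is the smaller of the two marginal mutual informations. The sketch below assumes that convention; the function names and the example covariance are illustrative and not taken from the paper.

```python
import numpy as np

def gaussian_mi(cov, idx_x, idx_y):
    """Mutual information I(X; Y), in nats, for a jointly Gaussian vector
    with covariance `cov`; idx_x and idx_y are lists of component indices."""
    sx = cov[np.ix_(idx_x, idx_x)]
    sy = cov[np.ix_(idx_y, idx_y)]
    s = cov[np.ix_(idx_x + idx_y, idx_x + idx_y)]
    return 0.5 * np.log(np.linalg.det(sx) * np.linalg.det(sy) / np.linalg.det(s))

def gaussian_pid_mmi(cov):
    """Bivariate PID for (X1, X2, T) ~ N(0, cov), ordered (X1, X2, T),
    under the MMI redundancy convention (an assumption of this sketch)."""
    i1 = gaussian_mi(cov, [0], [2])      # I(T; X1)
    i2 = gaussian_mi(cov, [1], [2])      # I(T; X2)
    i12 = gaussian_mi(cov, [0, 1], [2])  # I(T; X1, X2)
    r = min(i1, i2)                      # redundancy = min marginal MI
    u1, u2 = i1 - r, i2 - r              # unique informations
    s = i12 - (u1 + u2 + r)              # synergy = remainder
    return dict(unique1=u1, unique2=u2, redundant=r, synergistic=s)

# Example: two correlated sources informative about a common target.
cov = np.array([[1.0, 0.3, 0.6],
                [0.3, 1.0, 0.4],
                [0.6, 0.4, 1.0]])
print(gaussian_pid_mmi(cov))
```

Because the MMI convention fixes the redundancy directly, the remaining three terms follow from the three mutual informations via the consistency equations above; this is what makes the Gaussian case analytical rather than an optimization.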
