For any given prediction task, there may exist multiple models that perform almost equally well. Motivated by the role of counterfactuals in studies of discrimination, we examine how individual predictions vary among these competing models. In particular, we study predictive multiplicity in probabilistic classification. We formally define measures of predictive multiplicity for this setting and develop optimization-based methods to compute them. We demonstrate how multiplicity can disproportionately impact marginalized individuals, and we apply our methodology to gain insight into why predictive multiplicity arises. Given our results, future work could explore how multiplicity relates to causal fairness.
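
The phenomenon described above can be illustrated with a toy sketch: fit a baseline probabilistic classifier, treat all weight vectors whose loss is within a small tolerance of the baseline as "almost equally good" competing models, and measure how much an individual's predicted probability can vary across them. All names, the sampling scheme, and the tolerance here are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data (illustrative only).
n, d = 500, 2
X = rng.normal(size=(n, d))
true_w = np.array([1.5, -1.0])
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ true_w))))

def log_loss(w):
    # Numerically stable logistic loss over the dataset.
    z = X @ w
    return np.mean(np.log1p(np.exp(-z * (2 * y - 1))))

# Baseline model: crude gradient-descent fit of logistic regression.
w = np.zeros(d)
for _ in range(2000):
    s = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (s - y) / n

base_loss = log_loss(w)
eps = 0.01  # tolerance defining "almost equally well"-performing models

# Randomly perturb the weights; keep only near-optimal competitors.
competing = [w]
for _ in range(2000):
    w_alt = w + rng.normal(scale=0.2, size=d)
    if log_loss(w_alt) <= base_loss + eps:
        competing.append(w_alt)

# Predicted probabilities of every competing model for every individual.
probs = np.array([1 / (1 + np.exp(-(X @ v))) for v in competing])

# Per-individual spread of risk scores across competing models:
# a simple proxy for how much multiplicity affects that individual.
spread = probs.max(axis=0) - probs.min(axis=0)
print(f"{len(competing)} competing models; "
      f"max per-individual probability spread: {spread.max():.2f}")
```

Even with a tight loss tolerance, some individuals typically receive noticeably different risk scores from different competing models, which is exactly the disparity the measures above are meant to quantify.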