Specifically in the FAccT literature, algorithmic bias tends to be characterized as a problem in its consequences rather than as evidence of the underlying societal and technical conditions that (re)produce it. In this context, explainability (XAI) tools are proposed as a means to gauge these conditions (e.g. SHAP and LIME, as well as libraries such as the What-If Tool or IBM's AI Fairness 360). While relevant, these tools tend to approach these conditions unrealistically: as static, as cumulative, and in terms of their causal import. Instead, I propose here that these tools be informed by a genealogical approach to bias. Following the tradition of Nietzsche and Foucault, a genealogy is “a form of historical critique, designed to overturn our norms by revealing their origins” (Hill, 2016, p. 1). In this case, I understand genealogy as a form of epistemic critique, designed to understand algorithmic bias in its consequences by focusing on the conditions of its possibility. In this respect, I propose to question XAI tools as much as to use them as questions, rather than as answers to the problem of bias as skewed performance. This work puts forward two proposals. First, I propose a framework that indexes XAI tools according to their relevance for bias as evidence: I identify feature-importance methods (e.g. SHAP) and rule-list methods as relevant to procedural fairness, while I identify counterfactual methods as relevant to (a) agency, in that they suggest what can be changed to affect an outcome, and (b) building a prima facie case for discrimination. Second, I propose a rubric of questions to test these tools' ability to detect so-called “bias-shifts”. Overall, the aim is to treat XAI approaches not as mere technical tools but as questions about skewed performance, used for evidence gathering with fairness implications.
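To make the contrast between the two method families concrete, the following is a minimal, self-contained sketch. All feature names, weights, and thresholds are invented for illustration, and the model is deliberately linear: for a linear model, SHAP-style attributions reduce to w_f * (x_f - E[x_f]), and a counterfactual can be found by searching for the smallest single-feature change that flips the decision. Real XAI libraries (e.g. the `shap` package or DiCE) are more general than this toy.

```python
# Toy linear decision model contrasting feature-importance and counterfactual
# explanations. Every name, weight, and threshold here is an illustrative
# assumption, not drawn from any real system.

WEIGHTS = {"income": 0.4, "debt": -0.5, "tenure": 0.3}   # hypothetical model
BASELINE = {"income": 1.0, "debt": 1.0, "tenure": 1.0}   # population means
THRESHOLD = 0.0

def score(x):
    return sum(WEIGHTS[f] * x[f] for f in WEIGHTS)

def approve(x):
    return score(x) >= THRESHOLD

def attributions(x):
    # Feature-importance view: for a linear model, SHAP values reduce to
    # w_f * (x_f - E[x_f]); negative values pushed the decision toward denial.
    return {f: WEIGHTS[f] * (x[f] - BASELINE[f]) for f in WEIGHTS}

def counterfactual(x, feature, step=0.1, max_steps=100):
    # Counterfactual view: smallest change to a single feature (in the
    # decision-improving direction) that flips the outcome, if one exists.
    cf = dict(x)
    direction = 1 if WEIGHTS[feature] > 0 else -1
    for _ in range(max_steps):
        if approve(cf) != approve(x):
            return cf
        cf[feature] += direction * step
    return None

applicant = {"income": 0.5, "debt": 1.5, "tenure": 1.0}
print(approve(applicant))                 # denied under these toy weights
print(attributions(applicant))            # "debt" contributes most negatively
print(counterfactual(applicant, "income"))  # income level that flips the outcome
```

The attribution answers a procedural question (which features drove the score), while the counterfactual answers an agency question (what the subject could change); the same model supports both, which is why the framework indexes them by evidential use rather than by mechanism.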