We study the relationship between prediction, explanation, and control in artificial ``predictive minds'', modeled as Long Short-Term Memory (LSTM) neural networks, that interact with simple dynamical systems. We show how to operationalize key philosophical concepts and how to model the associated cognitive biases. Our results reveal an unexpectedly complex relationship between prediction, explanation, and control: in many cases, ``predictive minds'' can be better at explanation and control than they are at prediction itself, a result that holds even in the presence of heuristics expected under computational resource constraints.
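To make the setup concrete, the following is a minimal illustrative sketch, not the paper's actual experimental configuration: a small LSTM ``predictive mind'' trained for one-step prediction on a simple dynamical system. The choice of the logistic map, the network sizes, and the training details are assumptions made only for illustration.

\begin{verbatim}
# Minimal sketch (illustrative assumptions only): an LSTM "predictive mind"
# trained to do one-step prediction on a simple dynamical system.
import torch
import torch.nn as nn

def logistic_map_trajectory(x0=0.2, r=3.9, steps=200):
    """Trajectory of the logistic map x_{t+1} = r * x_t * (1 - x_t)."""
    xs = [x0]
    for _ in range(steps - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return torch.tensor(xs, dtype=torch.float32)

class PredictiveMind(nn.Module):
    """A small LSTM mapping the observed history to a prediction of the next state."""
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.readout = nn.Linear(hidden_size, 1)

    def forward(self, x):
        out, _ = self.lstm(x)        # x: (batch, time, 1)
        return self.readout(out)     # one-step-ahead prediction at every time step

traj = logistic_map_trajectory()
inputs = traj[:-1].reshape(1, -1, 1)   # x_t
targets = traj[1:].reshape(1, -1, 1)   # x_{t+1}

model = PredictiveMind()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# Train the predictor by minimizing one-step prediction error.
for epoch in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

print(f"final one-step prediction MSE: {loss.item():.4f}")
\end{verbatim}

In this kind of setup, prediction quality can be scored by one-step error as above, while explanation and control would be probed with separate tasks defined on the same trained network; the specific scoring choices here are placeholders, not the paper's definitions.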