Building causal graphs can be a laborious process. To ensure all relevant variables have been captured, researchers often must consult clinicians and domain experts while also reviewing extensive medical literature. By encoding both commonsense and medical knowledge, large language models (LLMs) represent an opportunity to ease this process by automatically scoring edges (i.e., connections between two variables) in candidate graphs. However, LLMs have been shown to be brittle to the choice of probing words, context, and prompt that the user employs. In this work, we evaluate whether LLMs can be a useful tool for speeding up causal graph development.
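As a minimal sketch of what automatic edge scoring could look like, the snippet below phrases a candidate edge as a yes/no causal question and maps the model's answer to a binary score. The `query_llm` callable is a hypothetical stand-in for an actual LLM API call, stubbed here for illustration; the prompt wording and scoring rule are illustrative assumptions, not the method evaluated in this work.

```python
def build_edge_prompt(cause: str, effect: str) -> str:
    """Frame a candidate edge as a yes/no causal question."""
    return (
        f"Does a change in '{cause}' directly cause a change in '{effect}'? "
        "Answer with a single word: yes or no."
    )


def score_edge(cause: str, effect: str, query_llm) -> float:
    """Score an edge as 1.0 if the model answers yes, else 0.0.

    `query_llm` is a hypothetical callable taking a prompt string and
    returning the model's text response.
    """
    answer = query_llm(build_edge_prompt(cause, effect)).strip().lower()
    return 1.0 if answer.startswith("yes") else 0.0


# Stub standing in for a real LLM call, for demonstration only.
def fake_llm(prompt: str) -> str:
    return "yes" if "smoking" in prompt else "no"


print(score_edge("smoking", "lung cancer", fake_llm))   # prints 1.0
print(score_edge("eye color", "lung cancer", fake_llm)) # prints 0.0
```

In practice, the brittleness noted above means such scores can shift with the prompt phrasing, so a single template like this would need to be probed under multiple wordings.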