Having similar behavior at train-time and test-time---what we call a ``What You See Is What You Get (WYSIWYG)'' property---is desirable in machine learning. However, models trained with standard stochastic gradient descent (SGD) are known to violate it: their behaviors, such as subgroup performance or adversarial robustness, can differ substantially between training and testing. We show that Differentially-Private (DP) training provably ensures the high-level WYSIWYG property, which we quantify using a notion of Distributional Generalization (DG). Applying this connection, we introduce new conceptual tools for designing deep-learning methods by reducing generalization concerns to optimization ones: to mitigate unwanted behavior at test time, it is provably sufficient to mitigate this behavior on the train dataset. By applying this novel design principle, which bypasses ``pathologies'' of SGD, we construct simple algorithms that are competitive with SOTA in several distributional-robustness applications, significantly improve the privacy vs. disparate impact tradeoff of DP-SGD, and mitigate robust overfitting in adversarial training. Finally, we also improve on known theoretical bounds relating DP, stability, and distributional generalization.