We address domain generalization by viewing the underlying distributional shift as an interpolation between domains, and we devise an algorithm to learn a representation that is robustly invariant under such interpolation, an approach we coin \textit{interpolation robustness}. Through extensive experiments, we show that our approach significantly outperforms both the recent state-of-the-art algorithm of \citet{NEURIPS2021_2a271795} and the DeepAll baseline in a limited-data setting on the PACS and VLCS datasets.
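For concreteness, one illustrative way to formalize invariance under domain interpolation (our notation here, not necessarily the paper's exact objective) is: let $\phi$ denote the learned representation, let $x_i, x_j$ be examples drawn from two training domains, and let $\lambda \in [0,1]$ be a random mixing coefficient; one can then penalize the gap between the representation of an interpolated input and the interpolation of the representations,
\begin{equation*}
\mathcal{R}_{\mathrm{int}}(\phi) \;=\; \mathbb{E}_{\lambda}\,\mathbb{E}_{x_i,\, x_j}\left\| \phi\big(\lambda x_i + (1-\lambda)\, x_j\big) \;-\; \big(\lambda\,\phi(x_i) + (1-\lambda)\,\phi(x_j)\big) \right\|_2^2,
\end{equation*}
added as a regularizer to the standard classification loss. A representation with small $\mathcal{R}_{\mathrm{int}}$ behaves consistently along the interpolation path between domains, which is one way to make the notion of interpolation robustness precise.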