A fundamental problem in the analysis of relational data---graphs, matrices or higher-dimensional arrays---is to extract a summary of the common structure underlying relations between individual entities. A successful approach is latent variable modeling, which summarizes this structure as an embedding into a suitable latent space. Results in probability theory, due to Aldous, Hoover and Kallenberg, show that relational data satisfying an exchangeability property can be represented in terms of a random measurable function. In a Bayesian model, this function constitutes the natural model parameter, and we discuss how available latent variable models can be classified according to how they implicitly approximate this parameter. We obtain a flexible yet simple model for relational data by representing the parameter function as a Gaussian process. Efficient inference draws on the large arsenal of existing Gaussian process algorithms; sparse approximations prove particularly useful. We demonstrate applications of the model to network data and clarify its relation to models in the literature, several of which emerge as special cases.
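For concreteness, a brief sketch of the representation the abstract refers to (the two-dimensional case is due to Aldous and Hoover, with extensions by Kallenberg): a jointly exchangeable binary array $(X_{ij})$ admits the representation
\[
  X_{ij} \;\overset{\text{a.s.}}{=}\; F(U_i, U_j, U_{ij}),
  \qquad U_i,\, U_{ij} \overset{\text{iid}}{\sim} \mathrm{Uniform}[0,1], \quad U_{ij} = U_{ji},
\]
for some random measurable function $F$. The model summarized above places a Gaussian process prior on the corresponding parameter function; in illustrative notation (the logistic-Bernoulli link below is one possible choice for graph data, not necessarily the one used in the paper),
\[
  W \sim \mathcal{GP}(0, k),
  \qquad X_{ij} \mid W, U_i, U_j \sim \mathrm{Bernoulli}\bigl(\sigma(W(U_i, U_j))\bigr),
\]
where $k$ is a covariance kernel on $[0,1]^2$ and $\sigma$ denotes the logistic function.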