

Poster

Deconfounded Representation Similarity for Comparison of Neural Networks

Tianyu Cui · Yogesh Kumar · Pekka Marttinen · Samuel Kaski

Hall J (level 1) #408

Keywords: [ RSA ] [ representation similarity ] [ covariate adjustment regression ] [ functional similarity ] [ Deep Neural Networks ] [ CKA ]


Abstract:

Similarity metrics such as representational similarity analysis (RSA) and centered kernel alignment (CKA) have been used to understand neural networks by comparing their layer-wise representations. However, these metrics are confounded by the population structure of data items in the input space, leading to inconsistent conclusions about the functional similarity between neural networks, such as spuriously high similarity of completely random neural networks and inconsistent domain relations in transfer learning. We introduce a simple and generally applicable fix to adjust for the confounder with covariate adjustment regression, which improves the ability of CKA and RSA to reveal functional similarity and also retains the intuitive invariance properties of the original similarity measures. We show that deconfounding the similarity metrics increases the resolution of detecting functionally similar neural networks across domains. Moreover, in real-world applications, deconfounding improves the consistency between CKA and domain similarity in transfer learning, and increases the correlation between CKA and model out-of-distribution accuracy similarity.
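The sketch below illustrates, in NumPy, one way such a covariate adjustment could be wired into CKA. It is not the authors' reference implementation: the choice of the input-space Gram matrix as the confounder, the `regress_out` helper, and the off-diagonal least-squares design are assumptions made for illustration only.

```python
# Minimal sketch (not the authors' code) of a deconfounded CKA.
# Assumption: the confounder is the input-space similarity matrix K0, and the
# adjustment regresses each representation similarity matrix on K0, then
# computes CKA on the residuals.
import numpy as np

def gram_linear(Z):
    """Linear kernel (Gram) matrix of representations Z: (n_items, n_features)."""
    return Z @ Z.T

def center(K):
    """Double-center a kernel matrix, as in standard CKA."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def cka(Kx, Ky):
    """Centered kernel alignment between two kernel matrices."""
    Kx_c, Ky_c = center(Kx), center(Ky)
    hsic = np.sum(Kx_c * Ky_c)
    return hsic / (np.linalg.norm(Kx_c) * np.linalg.norm(Ky_c))

def regress_out(K, K0):
    """Remove the component of K explained by the confounder K0 via ordinary
    least squares on the off-diagonal entries (an assumed regression design)."""
    mask = ~np.eye(K.shape[0], dtype=bool)
    y, x = K[mask], K0[mask]
    X = np.stack([np.ones_like(x), x], axis=1)   # intercept + confounder
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = np.zeros_like(K)
    resid[mask] = y - X @ beta                   # residual similarity
    return resid

def deconfounded_cka(Zx, Zy, X_raw):
    """CKA between representations Zx, Zy after adjusting for the
    input-space similarity built from the raw inputs X_raw."""
    K0 = gram_linear(X_raw)
    Kx = regress_out(gram_linear(Zx), K0)
    Ky = regress_out(gram_linear(Zy), K0)
    return cka(Kx, Ky)

# Tiny usage example with random data
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(50, 32))    # 50 inputs, 32 raw features
Zx = rng.normal(size=(50, 64))       # layer representation of network A
Zy = rng.normal(size=(50, 128))      # layer representation of network B
print("CKA: ", cka(gram_linear(Zx), gram_linear(Zy)))
print("dCKA:", deconfounded_cka(Zx, Zy, X_raw))
```

With completely random representations, the plain CKA value can be noticeably above zero because both Gram matrices inherit structure from the shared inputs; regressing out the input-space similarity is intended to shrink exactly that spurious component.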
