Poster
Understanding the Generalization Benefit of Model Invariance from a Data Perspective
Sicheng Zhu · Bang An · Furong Huang

Tue Dec 07 04:30 PM -- 06:00 PM (PST)

Machine learning models developed to be invariant under certain data transformations have shown improved generalization in practice. However, a principled understanding of why invariance benefits generalization is limited. Given a dataset, there is often no principled way to select "suitable" data transformations under which model invariance guarantees better generalization. This paper studies the generalization benefit of model invariance by introducing the sample cover induced by transformations, i.e., a representative subset of a dataset that can approximately recover the whole dataset using transformations. For any set of data transformations, we provide refined generalization bounds for invariant models based on the sample cover. We also characterize the "suitability" of a set of transformations by its sample covering number, i.e., the smallest size of its induced sample covers, and show that the generalization bounds can be tightened for "suitable" transformations with a small sample covering number. Moreover, the proposed sample covering number can be evaluated empirically and thus provides guidance for selecting transformations under which to develop model invariance for better generalization. In experiments on multiple datasets, we evaluate the sample covering numbers of several commonly used transformations and show that a smaller sample covering number for a set of transformations (e.g., 3D-view transformations) indicates a smaller gap between test and training error for invariant models, verifying our propositions.
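To make the central quantity concrete, the sketch below estimates a sample covering number on a toy dataset with a greedy set-cover heuristic: points are added to the cover until every point in the dataset lies within some radius of a transformed version of a covered point. This is not the authors' code; the radius `epsilon`, the use of Euclidean distance, the greedy strategy, and the choice of planar rotations as the transformation set are all assumptions made for illustration.

```python
# Illustrative sketch (not the paper's implementation): greedily estimate an
# upper bound on the sample covering number of a toy 2-D dataset under a
# finite set of transformations. epsilon, the Euclidean metric, and the
# rotation transformations are assumptions for this example.
import numpy as np

def transform_orbit(x, transformations):
    """Apply every transformation in the set to a single point x."""
    return np.stack([t(x) for t in transformations])

def greedy_sample_cover(X, transformations, epsilon):
    """Pick a subset S of X so that each point of X is within epsilon of some
    transformed point of S; len(S) upper-bounds the sample covering number."""
    n = len(X)
    uncovered = np.ones(n, dtype=bool)
    cover_indices = []
    while uncovered.any():
        best_idx, best_covered = None, None
        for i in np.flatnonzero(uncovered):
            orbit = transform_orbit(X[i], transformations)            # (|T|, d)
            dists = np.linalg.norm(X[:, None, :] - orbit[None], axis=-1)
            covered = uncovered & (dists.min(axis=1) <= epsilon)
            if best_covered is None or covered.sum() > best_covered.sum():
                best_idx, best_covered = i, covered
        cover_indices.append(best_idx)
        uncovered &= ~best_covered
    return cover_indices

# Toy usage: 200 Gaussian points; transformation set = rotations by 0/90/180/270 degrees.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return lambda x: R @ x

transformations = [rotation(k * np.pi / 2) for k in range(4)]
cover = greedy_sample_cover(X, transformations, epsilon=0.3)
print(f"Estimated sample covering number: {len(cover)}")
```

Under this reading, a richer transformation set (e.g., more rotation angles) lets each covered point account for more of the dataset, shrinking the estimated covering number, which mirrors the paper's claim that "suitable" transformations with small sample covering numbers admit tighter generalization bounds.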

Author Information

Sicheng Zhu (University of Maryland, College Park)
Bang An (University of Maryland, College Park)
Furong Huang (University of Maryland)

Furong Huang is an assistant professor of computer science. Her research focuses on machine learning, high-dimensional statistics, and distributed algorithms, covering both the theoretical analysis and the practical implementation of parallel spectral methods for latent variable graphical models. Applications of her research include fast detection algorithms that discover hidden and overlapping user communities in social networks, convolutional sparse coding models for understanding the semantic meaning of sentences and for object recognition in images, and healthcare analytics that learns a hierarchy of human diseases to guide doctors in identifying potential diseases afflicting patients. Huang recently completed a postdoctoral position at Microsoft Research in New York.
