

Poster in Workshop: Information-Theoretic Principles in Cognitive Systems (InfoCog)

Practical estimation of ensemble accuracy

Simi Haber · Yonatan Wexler


Abstract:

Ensemble learning combines several individual models to obtain better generalization performance. In this work we present a method for estimating the joint power of several classifiers without jointly optimizing them. The essence of the method is a combinatorial bound on the number of mistakes the ensemble is likely to make. The bound can be efficiently approximated in time linear in the number of samples, allowing one, for example, to choose a combination of classifiers that is likely to produce higher joint accuracy. Moreover, the bound applies to unlabeled data, making it both accurate and practical in the modern setting of unsupervised learning. We demonstrate the method on popular large-scale face recognition datasets, which provide a useful playground for fine-grained classification tasks using noisy data over many classes. The proposed framework fits neatly into trending practices of unsupervised learning. It is a measure of the inherent independence of a set of classifiers that does not rely on extra information such as another classifier or labeled data.
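The abstract does not reproduce the combinatorial bound itself, but the key ingredient it describes, a label-free quantity computable in time linear in the number of samples, can be illustrated with a simple sketch. The snippet below (a hypothetical illustration, not the authors' method) measures pairwise disagreement between classifiers on unlabeled data; among individually strong classifiers, higher disagreement suggests complementary errors and therefore a potentially stronger ensemble. All function names here are assumptions introduced for illustration.

```python
import numpy as np

def disagreement_rate(pred_a, pred_b):
    """Fraction of samples on which two classifiers predict different labels.

    Uses only the predictions, never ground-truth labels, so it can be
    computed on unlabeled data in a single linear pass over the samples.
    """
    pred_a = np.asarray(pred_a)
    pred_b = np.asarray(pred_b)
    return float(np.mean(pred_a != pred_b))

def rank_pairs_by_disagreement(preds):
    """Return index pairs of classifiers sorted by descending disagreement.

    `preds` is a list of per-classifier prediction arrays over the same
    unlabeled sample set. Pairs that disagree more often are candidates
    for a more complementary (higher joint accuracy) ensemble.
    """
    n = len(preds)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sorted(
        pairs,
        key=lambda p: disagreement_rate(preds[p[0]], preds[p[1]]),
        reverse=True,
    )

# Toy usage: three classifiers' predicted class labels on six unlabeled samples.
preds = [
    [0, 1, 1, 2, 0, 1],  # classifier 0
    [0, 1, 1, 2, 0, 1],  # classifier 1 (identical to 0: no complementarity)
    [0, 2, 1, 0, 0, 1],  # classifier 2 (disagrees on two samples)
]
ranking = rank_pairs_by_disagreement(preds)
print(ranking[0])  # most-disagreeing pair
```

This captures only the flavor of the approach: the paper's bound is a more refined combinatorial estimate of joint mistakes, whereas raw disagreement is merely the simplest unlabeled-data proxy for classifier independence.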
