Poster
Wed Nov 30 09:00 AM -- 11:00 AM (PST) @ Hall J #805
Generalization Analysis on Learning with a Concurrent Verifier
Masaaki Nishino · Kengo Nakamura · Norihito Yasuda

Machine learning technologies are used in a wide range of practical systems. In practical situations, it is natural to expect the input-output pairs of a machine learning model to satisfy certain requirements. However, it is difficult to obtain a model that satisfies such requirements by learning from examples alone. A simple solution is to add a module that checks whether an input-output pair meets the requirements and, if not, modifies the model's output. Such a module, which we call a concurrent verifier (CV), can provide a certification, although it is unclear how the generalizability of the machine learning model changes when a CV is used. This paper gives a generalization analysis of learning with a CV. We analyze how the learnability of a machine learning model changes with a CV and show a condition under which we can obtain a guaranteed hypothesis by using a verifier only at inference time. We also show that, in multi-class classification and structured prediction settings, typical error bounds based on Rademacher complexity with a CV are no larger than those of the original model.
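The verifier described above can be illustrated with a minimal sketch for the multi-class setting: the model produces per-class scores, and a user-supplied requirement predicate over (input, label) pairs filters the outputs before the final prediction is chosen. The function and predicate names here are illustrative assumptions, not the paper's actual implementation.

```python
def verified_predict(scores, x, requirement):
    """Concurrent-verifier-style prediction (sketch): return the
    highest-scoring label whose (x, label) pair satisfies the
    requirement, or None if no label qualifies."""
    # Keep only the labels the verifier certifies for this input.
    allowed = [c for c in range(len(scores)) if requirement(x, c)]
    if not allowed:
        return None  # the verifier rejects every output; caller must handle
    # Among certified labels, pick the one the model scores highest.
    return max(allowed, key=lambda c: scores[c])

# Toy example: four labels; the requirement forbids label 2 for this input.
scores = [0.1, 0.3, 0.9, 0.2]
req = lambda x, c: c != 2
print(verified_predict(scores, "some input", req))  # -> 1
```

Because the verifier only restricts the model's output set, every returned prediction is certified to meet the requirement by construction, which is the mechanism whose effect on generalization the paper analyzes.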