Poster

Using Noise to Infer Aspects of Simplicity Without Learning

Zachery Boner · Harry Chen · Lesia Semenova · Ronald Parr · Cynthia Rudin

East Exhibit Hall A-C #3206
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Noise in data significantly influences decision-making in the data science process. In fact, it has been shown that noise in data generation processes leads practitioners to find simpler models. However, an open question remains: what degree of model simplification can we expect under different noise levels? In this work, we address this question by investigating the relationship between the amount of noise and model simplicity across various hypothesis spaces, focusing on decision trees and linear models. We formally show that noise acts as an implicit regularizer for several different noise models. Furthermore, we prove that Rashomon sets (sets of near-optimal models) constructed with noisy data tend to contain simpler models than the corresponding Rashomon sets constructed with noise-free data. Additionally, we show that noise expands the set of "good" features and consequently enlarges the set of models that use at least one good feature. Our work offers theoretical guarantees and practical insights for practitioners and policymakers on whether simple-yet-accurate machine learning models are likely to exist, based on knowledge of noise levels in the data generation process.
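The Rashomon set mentioned in the abstract can be made concrete with a minimal sketch. Here it is defined with an additive tolerance `epsilon` over empirical losses (some formulations use a multiplicative tolerance instead); the model names and loss values below are hypothetical, purely for illustration, and are not taken from the paper:

```python
def rashomon_set(losses, epsilon):
    """Return the models whose loss is within epsilon of the best loss.

    losses: dict mapping model name -> empirical loss on a dataset.
    epsilon: additive tolerance defining "near-optimal".
    """
    best = min(losses.values())
    return {name for name, loss in losses.items() if loss <= best + epsilon}

# Hypothetical losses for three model classes on the same task.
clean_losses = {"deep_tree": 0.10, "shallow_tree": 0.13, "linear": 0.24}
print(sorted(rashomon_set(clean_losses, epsilon=0.05)))
# -> ['deep_tree', 'shallow_tree']
```

The paper's claim can be read through this lens: label noise inflates the loss of complex models (which fit the noise) more than that of simple ones, shrinking the gap between them, so for the same `epsilon` the noisy Rashomon set tends to admit more of the simpler models.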
