## Unifying lower bounds on prediction dimension of convex surrogates

### Jessica Finocchiaro · Rafael Frongillo · Bo Waggoner

Tue 7 Dec 4:30 p.m. PST — 6 p.m. PST

Abstract: The convex consistency dimension of a supervised learning task is the lowest prediction dimension $d$ such that there exists a convex surrogate $L : \mathbb{R}^d \times \mathcal Y \to \mathbb R$ that is consistent for the given task. We present a new tool based on property elicitation, $d$-flats, for lower-bounding convex consistency dimension. This tool unifies approaches from a variety of domains, including continuous and discrete prediction problems. We use $d$-flats to obtain a new lower bound on the convex consistency dimension of risk measures, resolving an open question due to Frongillo and Kash (NeurIPS 2015). In discrete prediction settings, we show that the $d$-flats approach recovers and even tightens previous lower bounds using feasible subspace dimension.
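As a concrete illustration of the notion of prediction dimension (this example is standard and not taken from the abstract): for binary classification under the 0-1 loss, the hinge loss is a classic consistent convex surrogate that operates on a one-dimensional prediction, so the convex consistency dimension of this task is $d = 1$.

```latex
% Illustrative example (assumed, not from the paper): binary classification
% Target loss: \ell(r, y) = \mathbf{1}[\operatorname{sign}(r) \neq y],
% with labels y \in \{-1, +1\}.
% The hinge loss is a consistent convex surrogate with prediction dimension d = 1:
L(u, y) = \max(0,\ 1 - u y), \qquad u \in \mathbb{R},\ y \in \{-1, +1\}.
```

The question addressed by the paper is how small $d$ can be in general: for richer tasks (e.g. structured prediction or eliciting a risk measure), one asks for the smallest $d$ admitting any consistent convex surrogate $L : \mathbb{R}^d \times \mathcal Y \to \mathbb R$, and the $d$-flats tool gives lower bounds on that quantity.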
