Poster

SOL: Sampling-based Optimal Linear bounding of arbitrary scalar functions

Yuriy Biktairov · Jyotirmoy Deshmukh

Great Hall & Hall B1+B2 (level 1) #727
[ Paper ] [ Poster ] [ OpenReview ]
Thu 14 Dec 8:45 a.m. PST — 10:45 a.m. PST

Abstract: Finding tight linear bounds for activation functions in neural networks is an essential part of several state-of-the-art neural network robustness certification tools. An activation function is an arbitrary, nonlinear, scalar function $f: \mathbb{R}^d \rightarrow \mathbb{R}$. In the existing work on robustness certification, such bounds have been computed using human ingenuity for a handful of the most popular activation functions. While a number of heuristics have been proposed for bounding arbitrary functions, no analysis of tightness optimality for general scalar functions has been offered yet, to the best of our knowledge. We fill this gap by formulating a concise optimality criterion for tightness of the approximation, which allows us to build optimal bounds for any function convex in the region of interest $R$. For the more general class of functions Lipschitz-continuous in $R$, we propose a sampling-based approach (SOL) which, given an instance of the bounding problem, efficiently computes the tightest linear bounds within a given $\varepsilon > 0$ threshold. We leverage an adaptive sampling technique to iteratively build a set of sample points suitable for representing the target activation function. While the theoretical worst-case time complexity of our approach is $O(\varepsilon^{-2d})$, it typically takes only $O(\log^{\beta} \frac{1}{\varepsilon})$ time for some $\beta \ge 1$ and is thus sufficiently fast in practice. We provide empirical evidence of SOL's practicality by incorporating it into a robustness certifier and observing that it produces similar or higher certification rates while taking as little as a quarter of the time compared to other methods.
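To make the sampling-based idea concrete, below is a minimal sketch, assuming a one-dimensional activation with a known Lipschitz constant and a fixed uniform sample grid. This is not the authors' implementation: SOL samples adaptively and targets its own optimality criterion, while the function name, padding scheme, and parameters here are illustrative. The key soundness fact used is that an L-Lipschitz function exceeds the linear interpolation of two samples at spacing h by at most L·h/2, so any line dominating the padded samples is a valid upper bound over the whole interval, and the one with the smallest integral is the tightest in the mean sense.

    # Hypothetical sketch: sound linear upper bound of a 1-D Lipschitz
    # function via sampling + a linear program (not the paper's SOL).
    import numpy as np
    from scipy.optimize import linprog

    def linear_upper_bound(f, l, u, lip, n_samples=256):
        """Tightest-in-mean sound linear upper bound a*x + b >= f(x) on [l, u]."""
        xs = np.linspace(l, u, n_samples)
        h = (u - l) / (n_samples - 1)
        # On each sub-interval, an L-Lipschitz f exceeds the linear
        # interpolation of its endpoint samples by at most lip * h / 2,
        # so padding the samples by that amount keeps the bound sound.
        fs = f(xs) + lip * h / 2.0
        # Decision variables (a, b); tightness objective: minimise the
        # integral of a*x + b over [l, u], which is linear in (a, b).
        c = np.array([(u**2 - l**2) / 2.0, u - l])
        # Soundness at samples: a*x_i + b >= fs_i  <=>  -a*x_i - b <= -fs_i.
        A_ub = np.column_stack([-xs, -np.ones_like(xs)])
        res = linprog(c, A_ub=A_ub, b_ub=-fs, bounds=[(None, None)] * 2,
                      method="highs")
        a, b = res.x
        return a, b

    # Example: bound a sigmoid-based GELU approximation on [-2, 2];
    # lip=1.2 is an assumed (conservative) Lipschitz constant there.
    gelu = lambda x: x / (1.0 + np.exp(-1.702 * x))
    a, b = linear_upper_bound(gelu, -2.0, 2.0, lip=1.2)

A matching lower bound follows by applying the same routine to -f. Refining the grid (or, as in SOL, sampling adaptively) drives the gap between the bound and the function below any chosen ε.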
