Farkas' Theorem of the Alternative for Prior Knowledge in Deep Networks
Jeffery Kline · Joseph Bockhorst

In this paper, prior knowledge expressed in the form of polyhedral sets is introduced into the training of a deep neural network. This approach extends earlier work that applies Farkas' Theorem of the Alternative to linear support vector machine classifiers. Through this extension, we gain the ability to sculpt the decision boundary of a neural network by training on a set of discrete data points while simultaneously fitting an uncountable set of points that lie within a polytope defined by the prior knowledge. We demonstrate the viability of this approach on both synthetic and benchmark data sets.
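For reference, a standard statement of Farkas' theorem of the alternative, on which the abstract's approach is built (the exact variant used in the paper may differ):

```latex
\begin{theorem}[Farkas' Theorem of the Alternative]
Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^{m}$.
Then exactly one of the following two systems has a solution:
\begin{enumerate}
  \item $A x = b$, \quad $x \ge 0$, for some $x \in \mathbb{R}^{n}$;
  \item $A^{\top} y \ge 0$, \quad $b^{\top} y < 0$, for some $y \in \mathbb{R}^{m}$.
\end{enumerate}
\end{theorem}
```

Informally, either $b$ lies in the cone generated by the columns of $A$, or there is a hyperplane separating $b$ from that cone; results of this kind let polyhedral constraints (prior knowledge) be converted into checkable algebraic conditions on a classifier.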

Author Information

Jeffery Kline (American Family Insurance)
Joseph Bockhorst (American Family Insurance)
