Bregman Divergence for Stochastic Variance Reduction: Saddle-Point and Adversarial Prediction
Zhan Shi · Xinhua Zhang · Yaoliang Yu

Tue Dec 05 06:30 PM -- 10:30 PM (PST) @ Pacific Ballroom #10

Adversarial machines, where a learner competes against an adversary, have regained much recent interest in machine learning. They naturally take the form of saddle-point optimization, often with separable structure but sometimes also with unmanageably large dimension. In this work we show that adversarial prediction under multivariate losses can be solved much faster than previously possible. We first reduce the problem size exponentially by using appropriate sufficient statistics, and then we adapt the recent stochastic variance-reduced algorithm of Balamurugan & Bach (2016) to allow any Bregman divergence. We prove that the same linear rate of convergence is retained, and we show that for adversarial prediction using the KL divergence we can further achieve a speedup by a factor of the number of training examples compared with the Euclidean alternative. We verify the theoretical findings through extensive experiments on two example applications: adversarial prediction and LPboosting.
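To make the "any Bregman divergence" idea concrete, here is a minimal sketch (not the authors' implementation) of the proximal step that a KL-geometry variant of a variance-reduced method would take on the probability simplex. With the negative entropy as the mirror map, the Bregman proximal step has a closed form, the familiar exponentiated-gradient update, which avoids an explicit Euclidean projection onto the simplex; the function name `kl_prox` and the fixed step size are illustrative assumptions.

```python
import math

def kl_prox(w, g, eta):
    """One KL-Bregman proximal (mirror-descent) step on the simplex.

    Solves argmin_{p in simplex} <g, p> + (1/eta) * KL(p || w),
    whose closed-form solution is the exponentiated-gradient update:
    p_i proportional to w_i * exp(-eta * g_i).

    w   -- current point on the probability simplex (list of floats)
    g   -- (variance-reduced) gradient estimate at w
    eta -- step size (illustrative constant here)
    """
    scores = [wi * math.exp(-eta * gi) for wi, gi in zip(w, g)]
    z = sum(scores)                     # normalizing constant
    return [s / z for s in scores]      # stays on the simplex

# Usage: coordinates with larger gradient lose mass multiplicatively.
w0 = [0.25, 0.25, 0.25, 0.25]
g = [1.0, 0.0, 0.0, 0.0]
w1 = kl_prox(w0, g, eta=1.0)
```

The closed-form update is why the KL geometry can be much cheaper per step in high dimension than the Euclidean alternative, which requires a projection onto the simplex.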

Author Information

Zhan Shi (University of Illinois at Chicago)
Xinhua Zhang (University of Illinois at Chicago (UIC))
Yaoliang Yu (University of Waterloo)
