
Poster

Controllable Invariance through Adversarial Feature Learning

Qizhe Xie · Zihang Dai · Yulun Du · Eduard Hovy · Graham Neubig

Pacific Ballroom #121

Keywords: [ Supervised Deep Networks ] [ Computer Vision ] [ Adversarial Networks ] [ Natural Language Processing ] [ Denoising ]


Abstract:

Learning meaningful representations that maintain the content necessary for a particular task while filtering away detrimental variations is a problem of great interest in machine learning. In this paper, we tackle the problem of learning representations invariant to a specific factor or trait of data. The representation learning process is formulated as an adversarial minimax game. We analyze the optimal equilibrium of such a game and find that it amounts to maximizing the uncertainty of inferring the detrimental factor given the representation while maximizing the certainty of making task-specific predictions. On three benchmark tasks, namely fair and bias-free classification, language-independent generation, and lighting-independent image classification, we show that the proposed framework induces an invariant representation and leads to better generalization, as evidenced by improved performance.
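To make the minimax formulation concrete, below is a minimal PyTorch-style sketch of this kind of adversarial invariant-representation training: an encoder and task predictor are trained to classify well while a discriminator tries to recover the nuisance factor from the representation, and the encoder is penalized for letting it succeed. The module architectures, dimensions, trade-off weight GAMMA, and alternating update scheme are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

DIM_X, DIM_H, N_CLASSES, N_FACTORS, GAMMA = 64, 32, 10, 2, 1.0

encoder = nn.Sequential(nn.Linear(DIM_X, DIM_H), nn.ReLU())   # E: x -> representation h
predictor = nn.Linear(DIM_H, N_CLASSES)                       # M: h -> task label y
discriminator = nn.Linear(DIM_H, N_FACTORS)                   # D: h -> nuisance factor s

opt_em = torch.optim.Adam(
    list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def train_step(x, y, s):
    h = encoder(x)

    # 1) Discriminator step: make inferring s from h as certain as possible.
    d_loss = F.cross_entropy(discriminator(h.detach()), s)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Encoder/predictor step: predict y well while maximizing the
    #    discriminator's uncertainty about s (sign-flipped adversarial term).
    task_loss = F.cross_entropy(predictor(h), y)
    adv_loss = -F.cross_entropy(discriminator(h), s)
    loss = task_loss + GAMMA * adv_loss
    opt_em.zero_grad()
    loss.backward()
    opt_em.step()
    return task_loss.item(), d_loss.item()

# Example usage with random data (batch of 8):
x = torch.randn(8, DIM_X)
y = torch.randint(0, N_CLASSES, (8,))
s = torch.randint(0, N_FACTORS, (8,))
print(train_step(x, y, s))

At equilibrium of this game, the representation h carries as little usable information about s as the adversary can exploit while remaining predictive of y, which mirrors the trade-off described in the abstract.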
