Modern neural networks are often augmented with an attention mechanism, which tells the network where to focus within the input. In this paper, we propose a new framework for sparse and structured attention, built upon a smoothed max operator. We show that the gradient of this operator defines a mapping from real values to probabilities, suitable as an attention mechanism. Our framework includes softmax and a slight generalization of the recently-proposed sparsemax as special cases. However, we also show how our framework can incorporate modern structured penalties, resulting in more interpretable attention mechanisms that focus on entire segments or groups of an input. We derive efficient algorithms to compute the forward and backward passes of our attention mechanisms, enabling their use in a neural network trained with backpropagation. To showcase their potential as a drop-in replacement for existing attention mechanisms, we evaluate them on three large-scale tasks: textual entailment, machine translation, and sentence summarization. Our attention mechanisms improve interpretability without sacrificing performance; notably, on textual entailment and summarization, we outperform the standard attention mechanisms based on softmax and sparsemax.
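To illustrate the idea that the gradient of a regularized max over the probability simplex yields an attention mapping, here is a minimal NumPy sketch (not the authors' released code): with entropic regularization the gradient is the familiar softmax, while with squared-l2 regularization it is sparsemax, the Euclidean projection onto the simplex. Function names and the example scores are illustrative assumptions.

```python
# Sketch: two special cases of the gradient of a smoothed max operator.
import numpy as np

def softmax(x):
    # Gradient of max_Omega(x) with Omega(p) = sum_i p_i log p_i (entropy case).
    z = x - x.max()              # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def sparsemax(x):
    # Gradient of max_Omega(x) with Omega(p) = 0.5 * ||p||_2^2,
    # i.e. the Euclidean projection of x onto the probability simplex
    # (Martins & Astudillo, 2016). Produces exact zeros.
    z = np.sort(x)[::-1]                 # sort scores in decreasing order
    cssv = np.cumsum(z) - 1.0
    k = np.arange(1, len(x) + 1)
    support = z - cssv / k > 0           # coordinates kept in the support
    tau = cssv[support][-1] / k[support][-1]
    return np.maximum(x - tau, 0.0)

scores = np.array([2.0, 1.0, 0.1, -1.5])
print(softmax(scores))    # dense attention weights
print(sparsemax(scores))  # sparse attention weights with exact zeros
```

Both mappings take real-valued scores to a probability distribution, which is what makes them usable as attention weights; the structured penalties discussed in the paper generalize this construction beyond these two cases.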
Vlad Niculae (Cornell University)
Mathieu Blondel (NTT)