

Poster

Energy Transformer

Benjamin Hoover · Yuchen Liang · Bao Pham · Rameswar Panda · Hendrik Strobelt · Duen Horng Chau · Mohammed Zaki · Dmitry Krotov

Great Hall & Hall B1+B2 (level 1) #508
Materials: Project Page · Paper · Slides · Poster · OpenReview
Wed 13 Dec 8:45 a.m. PST — 10:45 a.m. PST

Abstract:

Our work combines aspects of three promising paradigms in machine learning: the attention mechanism, energy-based models, and associative memory. Attention is the powerhouse driving modern deep learning successes, but it lacks clear theoretical foundations. Energy-based models allow a principled approach to discriminative and generative tasks, but the design of the energy functional is not straightforward. At the same time, Dense Associative Memory models, or Modern Hopfield Networks, have a well-established theoretical foundation and allow an intuitive design of the energy function. We propose a novel architecture, called the Energy Transformer (or ET for short), that uses a sequence of attention layers purposely designed to minimize a specifically engineered energy function representing the relationships between the tokens. In this work, we introduce the theoretical foundations of ET, explore its empirical capabilities on the image completion task, and obtain strong quantitative results on graph anomaly detection and graph classification tasks.
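The sketch below only illustrates the idea stated in the abstract and is not the authors' implementation: token representations are treated as dynamical variables updated by gradient descent on a single scalar energy built from an attention-like log-sum-exp term and a dense associative memory (Hopfield) term. All names, shapes, and hyperparameters here (et_block, Wq, Wk, memories, alpha, steps) are hypothetical, and the layer normalization, multi-head structure, and exact energy terms of the paper are omitted.

```python
import torch

def attention_energy(x, Wq, Wk, beta=1.0):
    # Attention-like energy: negative log-sum-exp of pairwise query-key similarities.
    q = x @ Wq                          # (N, D) queries
    k = x @ Wk                          # (N, D) keys
    scores = beta * q @ k.T             # (N, N) similarities
    return -(1.0 / beta) * torch.logsumexp(scores, dim=1).sum()

def hopfield_energy(x, memories):
    # Dense associative memory term: tokens are pulled toward stored patterns.
    h = torch.relu(x @ memories.T)      # (N, M) overlaps with memory vectors
    return -0.5 * (h ** 2).sum()

def et_block(x, Wq, Wk, memories, alpha=0.1, steps=12):
    # Iteratively descend the total energy; the tokens are the dynamical
    # variables, while the weights stay fixed within the block.
    x = x.clone().requires_grad_(True)
    for _ in range(steps):
        energy = attention_energy(x, Wq, Wk) + hopfield_energy(x, memories)
        (grad,) = torch.autograd.grad(energy, x)
        x = (x - alpha * grad).detach().requires_grad_(True)
    return x.detach()

# Toy usage: 16 tokens of dimension 64, 32 memory vectors (all made up).
torch.manual_seed(0)
x = torch.randn(16, 64)
Wq, Wk = torch.randn(64, 64) / 8, torch.randn(64, 64) / 8
memories = torch.randn(32, 64) / 8
out = et_block(x, Wq, Wk, memories)
```

Because every update is a gradient step on one scalar energy, the block's dynamics are designed to decrease that energy, which is the property the abstract highlights in contrast to standard attention layers.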
