

Poster in Workshop: AI for Science: from Theory to Practice

Scalable Deep Potentials as Implicit Hierarchical Semi-Separable Operators

Michael Poli · Stefano Massaroli · Christopher Ré · Stefano Ermon


Abstract:

Direct application of Transformer architectures in scientific domains poses computational challenges due to quadratic scaling in the number of inputs. In this work, we propose an alternative method based on hierarchical semi-separable matrices (HSS), a class of rank-structured operators with linear-time evaluation algorithms. Through connections between linearized attention and HSS, we devise an implicit hierarchical parametrization strategy that interpolates between linear and quadratic attention, achieving both subquadratic scaling and high accuracy. We demonstrate the effectiveness of the proposed approach on the approximation of potentials from computational physics.
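The abstract contrasts quadratic attention with linearized attention, which admits linear-time evaluation. As a rough illustration only (not the paper's HSS-based parametrization), the sketch below shows the standard reordering trick behind linearized attention: a positive feature map `phi` (an assumed, illustrative choice here) lets the output be computed without materializing the n × n score matrix. Function names and dimensions are hypothetical.

```python
import numpy as np

def softmax_attention(q, k, v):
    """Standard attention: materializes the (n, n) score matrix, O(n^2) time and memory."""
    scores = q @ k.T / np.sqrt(q.shape[-1])                     # (n, n)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                                          # (n, d_v)

def linear_attention(q, k, v, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Linearized attention: with a positive feature map phi, the computation
    reorders to phi(q) @ (phi(k)^T v), avoiding the (n, n) matrix -> O(n) in n."""
    qp, kp = phi(q), phi(k)                                     # (n, d)
    kv = kp.T @ v                                               # (d, d_v)
    z = qp @ kp.sum(axis=0)                                     # (n,) normalization
    return (qp @ kv) / z[:, None]                               # (n, d_v)

# Toy usage: both return an (n, d_v) output, but only the linearized variant
# scales subquadratically with the number of inputs n.
rng = np.random.default_rng(0)
n, d, d_v = 512, 16, 16
q, k, v = rng.normal(size=(n, d)), rng.normal(size=(n, d)), rng.normal(size=(n, d_v))
print(softmax_attention(q, k, v).shape, linear_attention(q, k, v).shape)
```

The proposed method, per the abstract, goes further by using an implicit hierarchical (HSS) parametrization to interpolate between this linear regime and full quadratic attention; that construction is not reproduced here.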
