Poster
in
Workshop: Symmetry and Geometry in Neural Representations

INRFormer: Neuron Permutation Equivariant Transformer on Implicit Neural Representations

Lei Zhou · Varun Belagali · Joseph Bae · Prateek Prasanna · Dimitris Samaras


Abstract:

Implicit Neural Representations (INRs) represent continuous data precisely and encapsulate high-dimensional data compactly. Yet most contemporary research centers on data reconstruction with INRs, with limited exploration of processing INRs themselves. In this paper, we develop a model tailored to processing INRs explicitly for computer vision tasks. We conceptualize INRs as computational graphs with neurons as nodes and weights as edges. To process INR graphs, we propose INRFormer, which consists of alternating node blocks and edge blocks. Within the node block, we further propose SlidingLayerAttention (SLA), which performs attention over the nodes of three sequential INR layers. Sliding the SLA window across INR layers lets each layer's nodes access information from a broader scope of the entire graph. In the edge block, every edge's feature vector is concatenated with the features of its two linked nodes and then projected by an MLP. Finally, we formulate visual recognition as INR-to-INR (inr2inr) translation: INRFormer transforms the input INR, which maps coordinates to image pixels, into a target INR, which maps coordinates to labels. We demonstrate INRFormer on CIFAR10.
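The edge-block update described above (concatenate each edge's feature with its two endpoint node features, then project with an MLP) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; all function and variable names (`edge_block`, `mlp`, `src`, `dst`) and the dimensions are hypothetical.

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    # Two-layer MLP with ReLU; stands in for the projection MLP in the edge block.
    h = np.maximum(x @ W1 + b1, 0.0)
    return h @ W2 + b2

def edge_block(edge_feats, node_feats, src, dst, params):
    # Concatenate each edge's feature vector with the features of its
    # two linked nodes (source and destination), then project via an MLP.
    cat = np.concatenate([edge_feats, node_feats[src], node_feats[dst]], axis=-1)
    return mlp(cat, *params)

# Toy example: 5 nodes, 8 edges, hypothetical feature sizes.
rng = np.random.default_rng(0)
N, E, Dn, De, H, Dout = 5, 8, 4, 3, 16, 3
node_feats = rng.normal(size=(N, Dn))
edge_feats = rng.normal(size=(E, De))
src = rng.integers(0, N, size=E)   # source node index of each edge
dst = rng.integers(0, N, size=E)   # destination node index of each edge
params = (rng.normal(size=(De + 2 * Dn, H)), np.zeros(H),
          rng.normal(size=(H, Dout)), np.zeros(Dout))
new_edge_feats = edge_block(edge_feats, node_feats, src, dst, params)
print(new_edge_feats.shape)  # (8, 3): one updated feature vector per edge
```

The fancy indexing `node_feats[src]` gathers endpoint features for all edges at once, so the update is vectorized over the edge set.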