Poster
in
Workshop: Shared Visual Representations in Human and Machine Intelligence

Cyclic orthogonal convolutions for long-range integration of features

Federica Freddi · Jezabel Garcia · Michael Bromberg · Sepehr Jalali · Da-shan Shiu · Alvin Chua · Alberto Bernacchia


Abstract: In Convolutional Neural Networks (CNNs), information flows across a small neighbourhood of each pixel of an image, preventing long-range integration of features before reaching deep layers in the network. Inspired by neurons of the human visual cortex that respond to similar but distant visual features, we propose a novel architecture that allows efficient information flow between features $z$ and locations $(x,y)$ across the entire image with a small number of layers. This architecture uses a cycle of three orthogonal convolutions, not only in $(x,y)$ coordinates, but also in $(x,z)$ and $(y,z)$ coordinates. We stack a sequence of such cycles to obtain our deep network, named CycleNet. When compared to CNNs of similar size, our model obtains competitive results at image classification on the CIFAR-10 and ImageNet datasets. We hypothesise that long-range integration favours recognition of objects by shape rather than texture, and we show that CycleNet transfers better than CNNs to stylised images. On the Pathfinder challenge, where integration of distant features is crucial, CycleNet outperforms CNNs by a large margin. Code has been made available at: https://github.com/netX21/Submission
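The core idea of a cycle can be sketched in a few lines: treat the activation tensor as a 3D volume indexed by $(z, x, y)$ and convolve over each of the three coordinate planes in turn. The sketch below is a simplified, NumPy-only illustration (all function names are my own, and it uses single slice-wise 2D kernels rather than the full learned convolutions of CycleNet); it is meant to show how the $(x,z)$ and $(y,z)$ convolutions let information propagate along the feature axis, not to reproduce the authors' implementation.

```python
import numpy as np

def conv2d_same(plane, kernel):
    # "same"-padded 2D cross-correlation of one plane with a small kernel
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(plane, ((ph, ph), (pw, pw)))
    out = np.zeros_like(plane)
    H, W = plane.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def planar_conv(t, axes, kernel):
    # Convolve a 3D tensor over the two given axes, slice-wise along the third
    other = ({0, 1, 2} - set(axes)).pop()
    moved = np.moveaxis(t, (other, axes[0], axes[1]), (0, 1, 2))
    out = np.stack([conv2d_same(s, kernel) for s in moved])
    return np.moveaxis(out, (0, 1, 2), (other, axes[0], axes[1]))

def cycle(t, k_xy, k_xz, k_yz):
    # t has shape (Z, X, Y): axis 0 indexes features z, axes 1 and 2 locations x, y.
    # One cycle = three orthogonal convolutions over the (x,y), (x,z), (y,z) planes.
    t = planar_conv(t, (1, 2), k_xy)  # (x, y): the usual spatial convolution
    t = planar_conv(t, (0, 1), k_xz)  # (x, z): mixes features with x locations
    t = planar_conv(t, (0, 2), k_yz)  # (y, z): mixes features with y locations
    return t
```

Because the $(x,z)$ and $(y,z)$ convolutions slide along the feature axis, a few stacked cycles can couple distant image locations without needing the many spatial layers a plain CNN requires to grow its receptive field.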
