Talk in Workshop: Graph Representation Learning

Marco Gori: Graph Representations, Backpropagation, and Biological Plausibility

Marco Gori


Abstract:

Neural architectures and many learning environments can conveniently be expressed as graphs. Interestingly, it has recently been shown that the notion of receptive field, and the corresponding convolutional computation, extends nicely to graph-based data domains, with successful results. Graph neural networks (GNNs), on the other hand, were introduced by extending the notion of time-unfolding, which resulted in a state-based representation along with a learning process that requires relaxing the states to a fixed point. It turns out that, when applied to learning tasks on collections of graphs, algorithms based on this approach are computationally more expensive than recent graph convolutional networks.
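To make the fixed-point relaxation concrete, the following is a minimal sketch of the state-update scheme behind the original GNN model: every node state is repeatedly recomputed from its neighbors' states until the whole state vector stops changing. The names (`relax_states`), the mean-neighbor aggregation, and the `tanh` transition are illustrative assumptions, not the talk's exact formulation; convergence is only guaranteed when the transition is a contraction, which the small weight norm in the demo is meant to suggest.

```python
import numpy as np

def relax_states(adjacency, features, W_state, W_feat, tol=1e-5, max_iters=100):
    """Banach-style fixed-point iteration for GNN node states:
    x_v <- tanh(W_state @ mean(neighbor states) + W_feat @ features_v),
    repeated until the states stop changing."""
    n, d = adjacency.shape[0], W_state.shape[0]
    x = np.zeros((n, d))                               # initial states
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    for _ in range(max_iters):
        neighbor_mean = (adjacency @ x) / deg          # aggregate neighbor states
        x_new = np.tanh(neighbor_mean @ W_state.T + features @ W_feat.T)
        if np.max(np.abs(x_new - x)) < tol:            # converged to a fixed point
            return x_new
        x = x_new
    return x

# Toy demo on a 3-node path graph (all shapes are illustrative).
rng = np.random.default_rng(0)
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
feats = rng.normal(size=(3, 4))
W_s = 0.1 * rng.normal(size=(8, 8))    # small norm -> contraction, hence convergence
W_f = rng.normal(size=(8, 4))
states = relax_states(A, feats, W_s, W_f)   # (3, 8) fixed-point node states
```

The relaxation loop is the computational overhead the abstract refers to: graph convolutional networks replace it with a fixed, small number of feedforward layers.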

In this talk we advocate refreshing state-based graph representations, in the spirit of the early formulation of GNNs, for the case of “network domains” characterized by a single graph (e.g. traffic networks, social networks). In those cases, the data over the graph form a continuous stream in which time plays a crucial role and blurs the classic statistical distinction between training and test sets. By expressing the graphical domain and the neural network within the same Lagrangian framework for dealing with constraints, we derive novel learning algorithms that appear to be particularly well suited to network domains. Finally, we show that in the proposed learning framework the Lagrange multipliers are associated with the delta terms of backpropagation, and we provide intriguing arguments on its biological plausibility.
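The correspondence between multipliers and backprop deltas can be seen in a standard constrained-optimization sketch (the notation below is generic, not necessarily the talk's): write the forward computation as equality constraints, attach one Lagrange multiplier per state, and impose stationarity of the Lagrangian with respect to the states.

```latex
% Forward pass as equality constraints, one multiplier per state
% (generic layered notation, assumed here for illustration).
\begin{align*}
  \min_{w,\,x}\; E(x_L)
  \quad \text{s.t.} \quad
  x_\ell = f_\ell(x_{\ell-1}, w_\ell), \qquad \ell = 1,\dots,L,
\end{align*}
\begin{align*}
  \mathcal{L}(w, x, \lambda)
  = E(x_L) + \sum_{\ell=1}^{L} \lambda_\ell^{\top}
    \bigl( f_\ell(x_{\ell-1}, w_\ell) - x_\ell \bigr).
\end{align*}
% Stationarity with respect to the states gives
\begin{align*}
  \frac{\partial \mathcal{L}}{\partial x_L} = 0
    &\;\Longrightarrow\; \lambda_L = \nabla E(x_L), \\
  \frac{\partial \mathcal{L}}{\partial x_{\ell-1}} = 0
    &\;\Longrightarrow\;
      \lambda_{\ell-1}
      = \Bigl( \frac{\partial f_\ell}{\partial x_{\ell-1}} \Bigr)^{\!\top} \lambda_\ell,
      \qquad \ell = 2,\dots,L.
\end{align*}
```

The second recursion is exactly the backward pass, so each multiplier plays the role of the delta term that backpropagation computes for the corresponding state; this is the identification the abstract builds its biological-plausibility argument on.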
