Graph Convolutional Neural Networks: The Mystery of Generalization
Prof. Gitta Kutyniok, Mathematical Institute of the University of Munich
The tremendous importance of graph-structured data, arising for instance in
recommender systems or social networks, has led to the introduction of
graph convolutional neural networks (GCNs). These split into spatial
and spectral GCNs, where in the latter case filters are defined as
elementwise multiplication in the frequency domain of a graph.
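To make the spectral construction concrete, here is a minimal numerical sketch (a toy example chosen for this note, not taken from the talk): a filter is a function g of the graph Laplacian's eigenvalues, applied by transforming a signal into the eigenbasis, multiplying elementwise, and transforming back.

```python
import numpy as np

# Toy example: a 4-node path graph (an illustrative choice, not from the talk).
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
L = np.diag(A.sum(axis=1)) - A   # combinatorial graph Laplacian

# Graph Fourier basis: eigenvectors of L; eigenvalues play the role of frequencies.
lam, U = np.linalg.eigh(L)

def spectral_filter(x, g):
    """Apply the filter g by elementwise multiplication in the frequency domain."""
    x_hat = U.T @ x              # graph Fourier transform of the signal
    return U @ (g(lam) * x_hat)  # multiply elementwise, then transform back

x = np.array([1., 0., 0., 0.])                # a delta signal on node 0
y = spectral_filter(x, lambda l: np.exp(-l))  # a low-pass (heat-kernel-style) filter
```

Because g acts only on the eigenvalues, a constant signal (frequency zero) passes through this low-pass filter unchanged, while high-frequency components are damped.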
Since the dataset often consists of signals defined on many
different graphs, the trained network should generalize to signals
on graphs unseen in the training set. One instance of this problem
is the transferability of a GCN: if two graphs describe the same
phenomenon, a single filter, or the entire network, should have
approximately the same repercussion on both graphs. However, for a long time it was believed that spectral filters are not transferable.
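The transferability phenomenon can be sketched numerically under simple assumptions (this is an illustrative toy chosen here, not the construction from the talk): two cycle graphs of different sizes discretize the same circle, and one spectral filter, with eigenvalues rescaled to a common continuum normalization, acts nearly identically on samples of the same continuous signal.

```python
import numpy as np

def cycle_laplacian(n):
    """Laplacian of the n-node cycle graph, a discretization of the circle."""
    A = np.zeros((n, n))
    idx = np.arange(n)
    A[idx, (idx + 1) % n] = 1.0
    A[(idx + 1) % n, idx] = 1.0
    return np.diag(A.sum(axis=1)) - A

def filter_signal(L, x, g, scale):
    """Apply the spectral filter g to x; 'scale' maps graph eigenvalues to the
    continuum normalization (for the n-node cycle, scale = n**2)."""
    lam, U = np.linalg.eigh(L)
    return U @ (g(scale * lam) * (U.T @ x))

f = lambda t: np.sin(2 * np.pi * t) + 0.5 * np.cos(6 * np.pi * t)  # continuous signal
g = lambda l: np.exp(-l / 100.0)                                   # smooth low-pass filter

out = {}
for n in (50, 100):
    t = np.arange(n) / n                     # n sample points on the unit circle
    out[n] = filter_signal(cycle_laplacian(n), f(t), g, scale=n**2)

# The two graphs have different sizes, yet the filtered signals
# nearly coincide on the 50 shared sample points.
err = np.max(np.abs(out[100][::2] - out[50]))
```

The specific signal, filter, and rescaling are assumptions made for this sketch; the point is only that one filter produces nearly the same output on two different discretizations of the same underlying space.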
In this talk we aim at debunking this common misconception by
showing that if two graphs discretize the same continuous metric
space, then a spectral filter or GCN has approximately the same
repercussion on both graphs. Our analysis also accounts for large
graph perturbations and allows the graphs to have completely
different sizes and topologies, requiring only that both graphs
discretize the same underlying continuous space. Numerical
results even suggest that spectral GCNs are superior to spatial
GCNs when the dataset consists of signals defined on many different
graphs.

This is joint work with R. Levie, W. Huang, L. Bucci, and M.