A graph is a data structure that models a set of objects (nodes) and their relationships (edges). In recent years, thanks to the expressive power of graph structures, research on analyzing graphs with machine learning methods has attracted increasing attention. Graph Neural Networks (GNNs) are a class of deep learning-based methods for processing information in the graph domain. Owing to their strong performance and interpretability, GNNs have recently become a widely used approach to graph analysis.
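To make the definition concrete, here is a minimal sketch (illustrative only, not tied to any particular library) of a graph stored as an adjacency list, the representation most graph tools build on:

```python
from collections import defaultdict

class Graph:
    """A tiny undirected graph stored as an adjacency list."""

    def __init__(self):
        self.adj = defaultdict(set)  # node -> set of neighboring nodes

    def add_edge(self, u, v):
        # An undirected edge records the relationship in both directions.
        self.adj[u].add(v)
        self.adj[v].add(u)

    def neighbors(self, u):
        return self.adj[u]

g = Graph()
g.add_edge("A", "B")
g.add_edge("B", "C")
print(g.neighbors("B"))  # {'A', 'C'}
```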

The first motivation for GNNs stems from Convolutional Neural Networks (CNNs). The widespread application of CNNs brought breakthroughs in machine learning and opened a new era of deep learning. However, CNNs can only operate on regular Euclidean data, such as images (2D grids) and text (1D sequences). How to apply convolution to the non-Euclidean space of graph structures became a key problem for GNN models to solve.

Another motivation for GNNs comes from graph embedding, which learns low-dimensional vector representations of the nodes, edges, or subgraphs of a graph. Methods such as DeepWalk, LINE, and SDNE have achieved great success in network representation learning. However, these methods are computationally expensive and suboptimal on large-scale graphs, and GNNs aim to address these problems.
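As a rough illustration of this family, DeepWalk-style methods sample truncated random walks over the graph and feed them to a skip-gram model to learn node vectors. The sketch below shows only the walk-sampling step; names and parameters are illustrative, and the embedding training itself is omitted:

```python
import random

def random_walk(adj, start, length):
    """Sample a uniform random walk of `length` nodes starting from `start`.

    `adj` maps each node to a list of its neighbors.
    """
    walk = [start]
    for _ in range(length - 1):
        neighbors = adj[walk[-1]]
        if not neighbors:          # dead end: stop the walk early
            break
        walk.append(random.choice(neighbors))
    return walk

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(random_walk(adj, start=0, length=5))  # e.g. [0, 2, 3, 2, 1]
```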

⚫Development History

The concept of the graph neural network was first proposed by Gori et al. in 2005 and further elaborated by Scarselli et al. These early works learn a target node's representation by iteratively propagating neighborhood information through recurrent neural architectures until a stable fixed point is reached. This process is computationally expensive, and much recent research has been devoted to overcoming this difficulty. Broadly speaking, the term graph neural network covers all deep learning methods applied to graph data.
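The following is a simplified sketch of that fixed-point idea, using a hand-rolled mean-aggregation update as a stand-in for the learned transition function of the original papers:

```python
import numpy as np

def propagate_to_fixed_point(adj, x, alpha=0.5, tol=1e-6, max_iter=1000):
    """Iterate a contractive neighborhood update until node states stabilize.

    adj: (n, n) adjacency matrix; x: (n, d) node features.
    The mean-of-neighbors rule here is an illustrative placeholder for the
    learned recurrent transition function in the original works.
    """
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)  # avoid divide-by-zero
    h = x.copy()
    for _ in range(max_iter):
        h_new = alpha * (adj @ h) / deg + (1 - alpha) * x
        if np.abs(h_new - h).max() < tol:   # states have reached a fixed point
            break
        h = h_new
    return h

adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
x = np.random.rand(3, 4)
print(propagate_to_fixed_point(adj, x).shape)  # (3, 4)
```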

Motivated by the great success of convolutional networks in computer vision, many approaches have emerged in recent years that redefine the notion of convolution for graph data. These methods fall under the umbrella of graph convolutional networks (GCNs). The first prominent research on graph convolutional networks was presented by Bruna et al. in 2013, who developed a variant of graph convolution based on spectral graph theory. Since then, spectral-based graph convolutional networks have been continuously improved and extended. Because spectral methods usually process the entire graph at once and are difficult to parallelize or scale to large graphs, spatial-based graph convolutional networks have developed rapidly. These methods perform convolution directly on the graph structure by aggregating information from neighboring nodes. Combined with a sampling strategy, computation can be performed on a batch of nodes rather than the whole graph, which promises to improve efficiency. Beyond graph convolutional networks, many alternative graph neural networks have been developed in recent years, including graph attention networks (GATs), graph autoencoders, graph generative networks, and graph spatiotemporal networks.
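As an illustration of the spatial view, a single graph-convolution layer in the commonly used normalized form H' = σ(D^{-1/2}(A + I)D^{-1/2}HW) can be sketched as follows; the weights are random placeholders rather than trained parameters:

```python
import numpy as np

def gcn_layer(adj, h, w):
    """One spatial graph-convolution layer.

    adj: (n, n) adjacency; h: (n, d_in) node features; w: (d_in, d_out) weights.
    """
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ h @ w, 0.0)        # aggregate, transform, ReLU

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
h = np.random.rand(3, 4)
w = np.random.rand(4, 2)
print(gcn_layer(adj, h, w).shape)  # (3, 2)
```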

Battaglia et al. positioned graph networks as building blocks for learning from relational data and reviewed part of the graph neural network literature under a unified framework. However, their framework is highly abstract and loses the insights each method offers in its original paper. Lee et al. conducted a partial survey of graph attention models, one type of graph neural network. Recently, Zhang et al. presented a state-of-the-art survey of deep learning on graphs, but it omits studies on graph generative networks and graph spatiotemporal networks. In short, no existing survey provides a comprehensive review of GNNs: each covers only part of the field and examines a limited set of studies, thereby missing recent progress on alternatives such as graph generative networks and graph spatiotemporal networks.

⚫Future Development Directions

Deepening the network. The success of deep learning rests on deep neural architectures; in image classification, for example, the ResNet model has 152 layers. In graph networks, however, empirical studies show that model performance drops sharply as the number of layers increases. This is an effect of graph convolution itself: each convolution essentially pushes the representations of adjacent nodes closer together, so in theory, with an infinite number of convolutions, the representations of all nodes converge to a single point.
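This convergence effect can be demonstrated with a toy experiment; the setup below (random features, mean-aggregation propagation) is an illustrative assumption, not a result from the literature:

```python
import numpy as np

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
a_hat = adj + np.eye(4)                           # self-loops keep the walk aperiodic
p = a_hat / a_hat.sum(axis=1, keepdims=True)      # mean-aggregation propagation

h = np.random.rand(4, 2)                          # random node features
for k in [1, 2, 4, 8, 16, 32]:
    h_k = np.linalg.matrix_power(p, k) @ h        # k propagation steps
    spread = np.ptp(h_k, axis=0).max()            # largest per-feature gap across nodes
    print(f"{k:2d} steps: node spread = {spread:.4f}")
```

Running this shows the spread shrinking toward zero as more propagation steps are stacked: the node representations become indistinguishable, which is the over-smoothing problem described above.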

Receptive field. The receptive field of a node refers to a set of nodes consisting of the central node and its neighbors. The number of neighbors a node has follows a power-law distribution: some nodes have only one neighbor, while others have thousands. Even with sampling strategies, how to select a representative receptive field for each node remains to be explored.
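One common mitigation is to cap the receptive field with fixed-size neighbor sampling, in the style popularized by GraphSAGE; a minimal sketch, with illustrative function and parameter names:

```python
import random

def sample_neighbors(adj, node, k):
    """Return at most k neighbors of `node`, sampled without replacement,
    so high-degree nodes do not dominate computation."""
    neighbors = adj[node]
    if len(neighbors) <= k:
        return list(neighbors)
    return random.sample(neighbors, k)

# Node 0 is a hub with five neighbors; cap its receptive field at three.
adj = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]}
print(sample_neighbors(adj, 0, k=3))  # e.g. [2, 5, 1]
```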

Scalability. Most graph neural networks do not scale well to large graphs. The main reason is that when multiple graph-convolution layers are stacked, the final state of a node depends on the hidden states of a large number of its neighbors, making backpropagation very expensive. Although some methods try to improve efficiency through fast sampling and sub-graph training, they still cannot scale deep architectures to large graphs.
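A hedged sketch of the sub-graph training idea: split the node set into partitions and run each training step on the induced sub-graph alone, so per-step memory does not grow with the full graph. Random partitioning is used here for simplicity; methods such as Cluster-GCN use graph clustering instead.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_partitions(num_nodes, num_parts):
    """Split node indices into roughly equal random partitions."""
    perm = rng.permutation(num_nodes)
    return np.array_split(perm, num_parts)

# Build a small random undirected graph as a stand-in for a large one.
adj = (rng.random((8, 8)) < 0.3).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T                         # symmetric, no self-loops

for part in random_partitions(num_nodes=8, num_parts=2):
    sub_adj = adj[np.ix_(part, part)]     # induced sub-graph adjacency
    # ... one gradient step on `sub_adj` would go here ...
    print(part, sub_adj.shape)
```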

Dynamic and heterogeneous. Most current graph neural networks handle static, homogeneous graphs: on the one hand, the graph structure is assumed to be fixed; on the other, the nodes and edges are assumed to come from a single source. These two assumptions are unrealistic in many cases. In a social network, a new person may join at any time, and an existing person may leave. In a recommender system, products may be of different types, and their input features may take different forms, such as text or images. New methods should therefore be developed to handle dynamic and heterogeneous graph structures.