|
# Training-Free Graph Neural Networks And The Power Of Labels As Features |
|
|
|
Ryoma Sato (rsato@nii.ac.jp), National Institute of Informatics

Reviewed on OpenReview: https://openreview.net/forum?id=7DzU88VrNU
|
|
|
## Abstract |
|
|
|
We propose training-free graph neural networks (TFGNNs), which can be used without training and can also be improved with optional training, for transductive node classification. We first advocate labels as features (LaF), an admissible but underexplored technique, and show that LaF provably enhances the expressive power of graph neural networks. We design TFGNNs based on this analysis. In the experiments, we confirm that TFGNNs outperform existing GNNs in the training-free setting and converge with far fewer training iterations than traditional GNNs.
|
|
|
## 1 Introduction |
|
|
|
Graph Neural Networks (GNNs) [12, 41] are popular machine learning models for processing graph data. |
|
|
|
GNNs show strong empirical performance in various machine learning and data mining tasks, including chemical modeling [10, 19], question answering [34, 44], and recommender systems [7, 15, 47, 49, 55]. |
|
|
|
One of the standard problem settings for GNNs is transductive node classification, where the goal is to predict the labels of the test nodes in a graph given the labels of other nodes. This setting has many applications, including document classification [20, 54], e-commerce [42, 56], and social analysis [13, 56]. Many GNNs, including Graph Convolutional Networks (GCNs) [20] and Graph Attention Networks (GATs) [45], tackled transductive node classification and showed excellent performance. |
|
|
|
One of the challenges of GNNs is the computational cost. There are many huge graphs in practice, such as social networks and Web graphs, which contain billions of nodes. It is sometimes prohibitive to even scan the entire graph, e.g., the whole World Wide Web. Many methods to speed up GNNs have been proposed. The basic approach is sampling nodes and/or edges to reduce the graph size [4, 13, 56, 59]. In the extreme case, Sato et al. [40] proposed constant time graph neural networks by neighbor sampling. Although it drastically reduces the computational cost per iteration, it still requires many training iterations. PinSAGE [55] adopts parallel training with MapReduce as well as importance pooling to speed up the training. Although PinSAGE succeeded in training GNNs with Web-scale graphs, it requires massive computational resources. It is still challenging to instantly use GNNs with limited computational resources. In this paper, we propose training-free graph neural networks (TFGNNs). |
|
|
|
To design TFGNNs, we first introduce the idea of labels as features (LaF). LaF uses the node labels as features, which is admissible in the framework of transductive node classification. GNNs with LaF can utilize the label information, such as the class distribution in the neighboring nodes, to compute the node embeddings, which contain much more information than the embeddings with only the node features. We show that LaF provably enhances the expressive power of GNNs. TFGNNs can be used without training and deployed instantly as soon as the model is initialized. This property reduces the burden of hyperparameter tuning as no training process is involved in this mode. |
|
|
|
TFGNNs can also be improved with optional training. Users can use TFGNNs in the training-free mode or train TFGNNs for a few iterations when the computational resources for training are limited. This property is useful for online learning, where training data come in a streaming manner and the model should be updated instantly. Users can also fully train TFGNNs when the computational resources are abundant or high accuracy is required. TFGNNs enjoy the best of both worlds of nonparametric models and GNNs.
|
|
|
In the experiments, we confirm that TFGNNs outperform existing GNNs in the training-free setting and converge with far fewer training iterations than traditional GNNs.
|
|
|
The contributions of this paper are as follows: |
|
- We advocate the use of LaF in transductive learning. |
|
|
|
- We formally show that LaF strengthens the expressive power of GNNs. |
|
|
|
- We propose training-free graph neural networks (TFGNNs).

- We confirm that TFGNNs outperform existing GNNs in the training-free setting.
|
|
|
Reproducibility: Our code is available at https://github.com/joisino/laf. |
|
|
|
## 2 Background

## 2.1 Notations
|
|
|
For every positive integer $n \in \mathbb{Z}_+$, $[n]$ denotes the set $\{1, 2, \ldots, n\}$. A graph is defined as a tuple of (i) the set $V$ of nodes, (ii) the set $E$ of edges, and (iii) the node features $\mathbf{X} = [\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n]^\top \in \mathbb{R}^{n \times d}$. Without loss of generality, we assume that the nodes are numbered $1, 2, \ldots, n$. $\mathcal{Y}$ denotes the set of labels. $\mathbf{y}_v \in \mathbb{R}^{|\mathcal{Y}|}$ denotes the one-hot encoding of the label of node $v$. For every node $v \in V$, $\mathcal{N}(v)$ denotes the set of neighbors of node $v$. We adopt the numpy-like notation for indexing. For example, $\mathbf{X}_{:,1}$ denotes the first column of $\mathbf{X}$, $\mathbf{X}_{:,-1}$ denotes the last column of $\mathbf{X}$, $\mathbf{X}_{:,-5:}$ denotes the last five columns of $\mathbf{X}$, and $\mathbf{X}_{:,:-5}$ denotes all the columns except for the last five columns of $\mathbf{X}$.
|
|
|
## 2.2 Transductive Node Classification |
|
|
|
Problem (Transductive Node Classification). |
|
|
|
Input: A graph $G = (V, E, \mathbf{X})$, labeled nodes $V_{\text{train}} \subseteq V$, and node labels $Y_{\text{train}} \in \mathcal{Y}^{V_{\text{train}}}$ of $V_{\text{train}}$.
|
|
|
Output: Predicted labels $Y_{\text{test}} \in \mathcal{Y}^{V_{\text{test}}}$ of $V_{\text{test}} = V \setminus V_{\text{train}}$.
|
|
|
There are two settings for the node classification problem: transductive and inductive. In transductive node classification, one graph and the labels of some of its nodes are given, and we predict the labels of the remaining nodes. This setting uses the same graph for training and testing. This is in contrast to the inductive setting, which uses different graphs for training and testing. For example, in spam account detection, annotating spam accounts on Facebook and using the trained model on Facebook is an example of the transductive setting, while using the trained model on X (Twitter) is an example of the inductive setting. |
|
|
|
Transductive node classification is a popular setting in the GNN community; it has been employed in well-known studies, such as GCNs [20] and GATs [45], and has been adopted in popular benchmarks such as Cora, PubMed, and CiteSeer. There are also many practical applications of transductive node classification, such as document classification [20, 54] and fraud detection [23, 24, 46].
|
|
|
## 2.3 Graph Neural Networks |
|
|
|
GNNs are a popular solution for transductive node classification. We follow the message-passing framework of GNNs [10]. A message passing GNN is defined as follows: |
|
|
|
$$\mathbf{h}^{(0)}_v = \mathbf{x}_v \quad (\forall v \in V), \tag{1}$$

$$\mathbf{h}^{(l)}_v = f^{(l)}_{\text{agg}}\left(\mathbf{h}^{(l-1)}_v, \{\!\!\{\mathbf{h}^{(l-1)}_u \mid u \in \mathcal{N}(v)\}\!\!\}\right) \quad (\forall l \in [L], v \in V), \tag{2}$$

$$\hat{\mathbf{y}}_v = f_{\text{pred}}(\mathbf{h}^{(L)}_v) \quad (\forall v \in V), \tag{3}$$

where $f^{(l)}_{\text{agg}}$ is the aggregation function and $f_{\text{pred}}$ is the prediction head, which are typically modeled by neural networks.
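For concreteness, the following is a minimal sketch of this message-passing scheme in PyTorch. The mean-over-neighbors aggregator and the linear layers are placeholder choices for illustration, not the specific $f_{\text{agg}}$ or $f_{\text{pred}}$ of any model in this paper.

```python
import torch
import torch.nn as nn

class MessagePassingGNN(nn.Module):
    """A minimal message-passing GNN following Eqs. (1)-(3).

    Illustrative sketch: f_agg applies a linear layer to the concatenation of
    the center embedding and the mean of neighbor embeddings; f_pred is linear.
    """

    def __init__(self, in_dim, hidden_dim, num_classes, num_layers):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * num_layers
        self.f_agg = nn.ModuleList(
            nn.Linear(2 * dims[l], dims[l + 1]) for l in range(num_layers)
        )
        self.f_pred = nn.Linear(dims[-1], num_classes)

    def forward(self, x, adj):
        # x: (n, d) node features; adj: list of neighbor index tensors per node
        h = x  # h_v^{(0)} = x_v
        for layer in self.f_agg:
            neigh = torch.stack(
                [h[adj[v]].mean(dim=0) for v in range(h.shape[0])]
            )  # mean over {h_u : u in N(v)}
            h = torch.relu(layer(torch.cat([h, neigh], dim=1)))
        return self.f_pred(h)  # logits \hat{y}_v
```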
|
|
|
## 3 LaF Is Admissible, But Not Explored Well
|
|
|
We ask the readers to recall the problem setting of transductive node classification. We are given the node labels $\mathbf{y}_v$ of the training nodes. A typical approach for this problem is to feed the node features $\mathbf{x}_v$ of a training node $v$ to the model, predict the label of node $v$, compute the loss based on the ground-truth label $\mathbf{y}_v$, and update the model parameters. However, the way we use $\mathbf{y}_v$ is not restricted to the loss function. We can use $\mathbf{y}_v$ as features of node $v$ as well. This is the idea of LaF.
|
|
|
GNNs with LaF initialize the node embeddings in Eq. (1) as |
|
|
|
$$\mathbf{h}^{(0)}_v = [\mathbf{x}_v; \tilde{\mathbf{y}}_v] \in \mathbb{R}^{d+1+|\mathcal{Y}|}, \tag{4}$$

where $[\cdot\,;\cdot]$ denotes the concatenation of vectors, and

$$\tilde{\mathbf{y}}_v = \begin{cases} [1; \mathbf{y}_v] & (v \in V_{\text{train}}), \\ \mathbf{0}_{1+|\mathcal{Y}|} & (v \in V_{\text{test}}), \end{cases} \tag{5}$$

is the label vector of node $v$, and $\mathbf{0}_d$ is the zero vector of dimension $d$. LaF enables GNNs to utilize the label information, such as the class distribution in the neighboring nodes, to compute the node embeddings. Such embeddings are expected to be more informative than the embeddings without the label information. LaF is admissible in the sense that it uses only the information available in the transductive setting.
|
|
|
We emphasize that LaF has not been explored well in the literature on GNNs, despite its simplicity, with some notable exceptions [1, 50] (see Section 7 for detailed discussions). For example, GCNs [20] and GATs [45] adopt the transductive setting, and they are allowed to use the label information as features. However, they initialize the node embeddings as $\mathbf{h}^{(0)}_v = \mathbf{x}_v$ without using the label information. One of the contributions of this paper is that we affirm that LaF is allowed in the transductive setting.
|
|
|
We should be careful when training GNNs with LaF. LaF may harm the generalization performance by inducing a shortcut that copies the label feature $\mathbf{h}^{(0)}_{v, d+1:}$ to the prediction. To prevent this, we should remove the labels of the center nodes in the minibatch and treat them as test nodes. Specifically, let $B \subseteq V_{\text{train}}$ be the set of nodes in the minibatch, and set

$$\tilde{\mathbf{y}}_v = \begin{cases} [1; \mathbf{y}_v] & (v \in V_{\text{train}} \setminus B), \\ \mathbf{0}_{1+|\mathcal{Y}|} & (v \in V_{\text{test}} \cup B), \end{cases} \tag{6}$$

predict the label $\hat{\mathbf{y}}_v$ for $v \in B$, and compute the loss based on $\hat{\mathbf{y}}_v$ and $\mathbf{y}_v$. This simulates the transductive setting where the label information of the test nodes is missing, and GNNs learn how to predict the labels of the test nodes based on the label information and node features of the surrounding nodes.
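A minimal sketch of how the LaF input of Eqs. (4)-(6) can be assembled is given below; the function and tensor names are ours, for illustration only.

```python
import torch

def laf_features(x, y, train_mask, batch=None):
    """Build h^{(0)} = [x_v ; y~_v] as in Eqs. (4)-(6).

    x:          (n, d) float tensor of node features.
    y:          (n,) long tensor of labels (values for test nodes are ignored).
    train_mask: (n,) bool tensor, True for labeled (training) nodes.
    batch:      optional (n,) bool tensor marking minibatch nodes whose labels
                are hidden during training to avoid the label-copy shortcut.
    """
    n = x.shape[0]
    num_classes = int(y[train_mask].max()) + 1
    keep = train_mask.clone()
    if batch is not None:
        keep &= ~batch  # treat in-batch nodes as test nodes (Eq. (6))
    y_tilde = torch.zeros(n, 1 + num_classes)
    y_tilde[keep, 0] = 1.0                       # indicator "label is observed"
    y_tilde[keep, 1:] = torch.nn.functional.one_hot(
        y[keep], num_classes
    ).float()                                    # one-hot label y_v
    return torch.cat([x, y_tilde], dim=1)        # shape (n, d + 1 + |Y|)
```

In the training-free mode, `batch` is `None` and all training labels are visible; during optional training, the labels of the current minibatch are zeroed out, exactly as Eq. (6) prescribes.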
|
|
|
## 4 LaF Strengthens The Expressive Power Of GNNs
|
|
|
We show that LaF provably strengthens the expressive power of GNNs. Specifically, we show that GNNs with LaF can represent label propagation [58], an important model for transductive node classification, while GNNs without LaF cannot. This result is interesting in its own right, and it also motivates the design of TFGNNs. Label propagation is a classic method for transductive node classification. It starts random walks from a test node and outputs the label distribution of the labeled nodes that the random walks hit first. The following theorem shows that GNNs with LaF can represent label propagation.

Theorem 4.1. *GNNs with LaF can approximate label propagation with any precision. Specifically, there exists a series of GNNs $\{f^{(l)}_{\text{agg}}\}_l$ and $f_{\text{pred}}$ such that for any positive $\varepsilon$, any connected graph $G = (V, E, \mathbf{X})$, any labeled nodes $V_{\text{train}} \subseteq V$ and node labels $Y_{\text{train}} \in \mathcal{Y}^{V_{\text{train}}}$, and any test node $v \in V \setminus V_{\text{train}}$, there exists $L \in \mathbb{Z}_+$ such that the $l\,(\geq L)$-th GNN $(f^{(1)}_{\text{agg}}, \ldots, f^{(l)}_{\text{agg}}, f_{\text{pred}})$ with LaF outputs an approximation of label propagation with error at most $\varepsilon$, i.e.,*

$$\left\|\hat{\mathbf{y}}_v - \hat{\mathbf{y}}^{\mathrm{LP}}_v\right\|_1 \leq \varepsilon, \tag{7}$$

*where $\hat{\mathbf{y}}^{\mathrm{LP}}_v$ is the output of label propagation for test node $v$.*
|
|
|
Proof. We prove the theorem by construction. Let

$$p_{l,v,i} \stackrel{\text{def}}{=} \Pr[\text{The random walk from node } v \text{ hits } V_{\text{train}} \text{ within } l \text{ steps and the first hit label is } i]. \tag{8}$$

For the labeled nodes, this is a constant, i.e.,

$$p_{l,v,i} = \mathbb{1}[i = \mathbf{y}_v] \quad (\forall l \in \mathbb{Z}_{\geq 0}, v \in V_{\text{train}}, i \in \mathcal{Y}). \tag{9}$$

For the other nodes, this can be recursively computed as follows:

$$p_{0,v,i} = 0 \quad (\forall v \in V \setminus V_{\text{train}}, i \in \mathcal{Y}), \tag{10}$$

$$p_{l,v,i} = \sum_{u \in \mathcal{N}(v)} \Pr[\text{The first step is } v \to u] \cdot \Pr[\text{The random walk from node } v \text{ hits } V_{\text{train}} \text{ within } l \text{ steps and the first hit label is } i \mid \text{The first step is } v \to u] \tag{11}$$

$$= \sum_{u \in \mathcal{N}(v)} \frac{1}{\deg(v)} \cdot \Pr[\text{The random walk from node } u \text{ hits } V_{\text{train}} \text{ within } (l-1) \text{ steps and the first hit label is } i] \tag{12}$$

$$= \frac{1}{\deg(v)} \sum_{u \in \mathcal{N}(v)} p_{l-1,u,i}. \tag{13}$$

These equations can be represented by GNNs with LaF. Specifically, the base case

$$p_{0,v,i} = \begin{cases} \mathbb{1}[i = \mathbf{y}_v] & (v \in V_{\text{train}}), \\ 0 & (v \in V \setminus V_{\text{train}}), \end{cases} \tag{14}$$

can be computed from $\tilde{\mathbf{y}}_v$ in $\mathbf{h}^{(0)}_v$. Let $f^{(l)}_{\text{agg}}$ always concatenate the first argument (i.e., $\mathbf{h}^{(l-1)}_v$ in Eq. (2)) to the output so that the GNN can keep the information of the input. $f^{(l)}_{\text{agg}}$ handles two cases based on $\tilde{\mathbf{y}}_{v,1} \in \{0, 1\}$, i.e., whether $v$ is in $V_{\text{train}}$ or not. If $v$ is in $V_{\text{train}}$, $f^{(l)}_{\text{agg}}$ just outputs $\mathbb{1}[i = \mathbf{y}_v]$, which can be computed from $\tilde{\mathbf{y}}_v$ in $\mathbf{h}^{(l-1)}_v$. If $v$ is not in $V_{\text{train}}$, $f^{(l)}_{\text{agg}}$ aggregates $p_{l-1,u,i}$ from $u \in \mathcal{N}(v)$ and takes the average, i.e., Eq. (13), which can be realized by message passing in the second argument of $f^{(l)}_{\text{agg}}$.

The final output of the GNN is $p_{l,v,i}$. The output of label propagation can be decomposed as follows:

$$\hat{y}^{\mathrm{LP}}_{v,i} = \Pr[\text{The first hit label is } i] \tag{15}$$

$$= \Pr[\text{The random walk from node } v \text{ hits } V_{\text{train}} \text{ within } l \text{ steps and the first hit label is } i] + \Pr[\text{The random walk from node } v \text{ does not hit } V_{\text{train}} \text{ within } l \text{ steps and the first hit label is } i] \tag{16}$$

$$= p_{l,v,i} + \Pr[\text{The random walk from node } v \text{ does not hit } V_{\text{train}} \text{ within } l \text{ steps and the first hit label is } i]. \tag{17}$$

As the second term converges to zero as $l$ increases, the GNNs approximate label propagation with any precision by increasing $l$. $\square$
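For reference, the recursion of Eqs. (9)-(13) can also be computed directly, without any GNN. The following sketch (our own, for illustration) iterates it for a fixed number of steps and yields the quantity that the constructed GNN reproduces.

```python
import numpy as np

def label_propagation(adj_list, labels, train_idx, num_classes, num_steps=100):
    """Iterate p_{l,v,i} of Eqs. (9)-(13) for num_steps steps.

    adj_list:    list of neighbor index lists, adj_list[v] = N(v).
    labels:      dict {v: label} for v in train_idx.
    train_idx:   set of labeled node indices.
    num_classes: |Y|.
    Returns p[v, i] ~ Pr[random walk from v first hits a labeled node of class i].
    """
    n = len(adj_list)
    p = np.zeros((n, num_classes))
    for v in train_idx:                # Eq. (9): constant for labeled nodes
        p[v, labels[v]] = 1.0
    for _ in range(num_steps):
        new_p = p.copy()
        for v in range(n):
            if v in train_idx:
                continue               # labeled nodes keep their one-hot value
            # Eq. (13): average the neighbors' values from the previous step
            new_p[v] = np.mean([p[u] for u in adj_list[v]], axis=0)
        p = new_p
    return p
```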
|
|
|
We then show that GNNs without LaF cannot represent label propagation.

Proposition 4.2. *GNNs without LaF cannot approximate label propagation. Specifically, for any series of GNNs $\{f^{(l)}_{\text{agg}}\}_l$ and $f_{\text{pred}}$, there exist a positive $\varepsilon$, a connected graph $G = (V, E, \mathbf{X})$, labeled nodes $V_{\text{train}} \subseteq V$, node labels $Y_{\text{train}} \in \mathcal{Y}^{V_{\text{train}}}$, and a test node $v \in V \setminus V_{\text{train}}$ such that for any $l$, the GNN $(f^{(1)}_{\text{agg}}, \ldots, f^{(l)}_{\text{agg}}, f_{\text{pred}})$ without LaF has error at least $\varepsilon$, i.e.,*

$$\left\|\hat{\mathbf{y}}_v - \hat{\mathbf{y}}^{\mathrm{LP}}_v\right\|_1 \geq \varepsilon, \tag{18}$$

*where $\hat{\mathbf{y}}^{\mathrm{LP}}_v$ is the output of label propagation for test node $v$.*
|
|
|
Proof. We construct a counterexample. Let $G$ be a cycle of four nodes. The nodes are numbered $1, 2, 3, 4$ in the clockwise direction. All the nodes have the same node features $\mathbf{x}$. Let $V_{\text{train}} = \{1, 2\}$ and $Y_{\text{train}} = [1, 0]^\top$. Label propagation classifies node 4 as class 1 and node 3 as class 0. However, GNNs without LaF always predict the same label for nodes 3 and 4 since they are isomorphic. Therefore, for any GNN without LaF, there is an irreducible error for either node 3 or node 4. $\square$
|
|
|
Theorem 4.1 and Proposition 4.2 show that LaF provably enhances the expressive power of GNNs. These results indicate that GNNs with LaF are more powerful than traditional message passing GNNs such as GCNs, GATs, and GINs without LaF. Note that GINs have been considered to be the most expressive message passing GNNs, but GINs cannot represent label propagation without LaF while message passing GNNs with LaF can. This does not lead to a contradiction since the original GINs do not take the label information as input. Put differently, the input domains of the functions differ. These results indicate that it is important to consider what to input to GNNs as well as the architecture of GNNs. |
|
|
|
## 5 Training-Free Graph Neural Networks |
|
|
|
We propose training-free graph neural networks (TFGNNs) based on the analysis in the previous section. TFGNNs can be used without training and can also be improved with optional training. First, we define training-free models.

Definition 5.1 (Training-free Model). *We say a parametric model is training-free if it can be used without optimizing the parameters.*
|
|
|
It should be noted that nonparametric models are training-free by definition. The real worth of TFGNNs is that they are training-free while they can also be improved with optional training. Users can enjoy the best of both worlds of parametric and nonparametric models by choosing the trade-off based on the computational resources available for training and the accuracy required.
|
|
|
Figure 1: Initialization of TFGNNs. The parameters of the last $(1 + |\mathcal{Y}|)$ rows or $|\mathcal{Y}|$ rows are initialized to 0 or 1 in a special pattern.
|
The core idea of TFGNNs is to embed label propagation in GNNs by Theorem 4.1. TFGNNs are defined as follows: |
|
|
|
$$\mathbf{h}^{(0)}_v = [\mathbf{x}_v; \tilde{\mathbf{y}}_v] \quad (\forall v \in V), \tag{19}$$

$$\mathbf{h}^{(l)}_v = \begin{cases} \mathrm{ReLU}\!\left(\mathbf{S}^{(l)} \mathbf{h}^{(l-1)}_v + \frac{1}{|\mathcal{N}(v)|} \sum_{u \in \mathcal{N}(v)} \mathbf{V}^{(l)} \mathbf{h}^{(l-1)}_u\right) & (v \in V_{\text{train}}, l \in [L]), \\ \mathrm{ReLU}\!\left(\mathbf{T}^{(l)} \mathbf{h}^{(l-1)}_v + \frac{1}{|\mathcal{N}(v)|} \sum_{u \in \mathcal{N}(v)} \mathbf{W}^{(l)} \mathbf{h}^{(l-1)}_u\right) & (v \in V_{\text{test}}, l \in [L]), \end{cases} \tag{20}$$

$$\hat{\mathbf{y}}_v = \mathrm{softmax}(\mathbf{U} \mathbf{h}^{(L)}_v) \quad (\forall v \in V). \tag{21}$$
|
The architecture of TFGNNs is standard, i.e., TFGNNs transform the center nodes and carry out mean aggregation from the neighboring nodes. The key to TFGNNs lies in initialization. The parameters are initialized as follows: |
|
|
|
$$\mathbf{S}^{(l)}_{-(1+|\mathcal{Y}|):,\,:-(1+|\mathcal{Y}|)} = \mathbf{0} \quad (\forall l \in [L]), \tag{22}$$

$$\mathbf{S}^{(l)}_{-(1+|\mathcal{Y}|):,\,-(1+|\mathcal{Y}|):} = \mathbf{I}_{1+|\mathcal{Y}|} \quad (\forall l \in [L]), \tag{23}$$

$$\mathbf{V}^{(l)}_{-(1+|\mathcal{Y}|):} = \mathbf{0} \quad (\forall l \in [L]), \tag{24}$$

$$\mathbf{T}^{(l)}_{-(1+|\mathcal{Y}|):} = \mathbf{0} \quad (\forall l \in [L]), \tag{25}$$

$$\mathbf{W}^{(l)}_{-(1+|\mathcal{Y}|):,\,:-(1+|\mathcal{Y}|)} = \mathbf{0} \quad (\forall l \in [L]), \tag{26}$$

$$\mathbf{W}^{(l)}_{-(1+|\mathcal{Y}|):,\,-(1+|\mathcal{Y}|):} = \mathbf{I}_{1+|\mathcal{Y}|} \quad (\forall l \in [L]), \tag{27}$$

$$\mathbf{U}_{:,\,:-|\mathcal{Y}|} = \mathbf{0}, \tag{28}$$

$$\mathbf{U}_{:,\,-|\mathcal{Y}|:} = \mathbf{I}_{|\mathcal{Y}|}, \tag{29}$$

i.e., the parameters of the last $(1 + |\mathcal{Y}|)$ rows or $|\mathcal{Y}|$ rows are initialized to 0 or 1 in a special pattern (Figure 1). The other parameters are initialized randomly, e.g., by Xavier initialization [11]. The following proposition shows that the initialized TFGNNs approximate label propagation.
|
|
|
Proposition 5.2. *The initialized TFGNNs approximate label propagation. Specifically,*

$$\mathbf{h}^{(L)}_{v, -(|\mathcal{Y}|-i+1)} = p_{L,v,i} \tag{30}$$

*holds, where $p_{L,v,i}$ is defined in Eq. (8), and*

$$\operatorname*{arg\,max}_{i} \hat{\mathbf{y}}_{v,i} = \operatorname*{arg\,max}_{i} p_{L,v,i} \tag{31}$$

*holds, and $p_{L,v,i} \to \hat{y}^{\mathrm{LP}}_{v,i}$ as $L \to \infty$.*

Proof. By the definitions of TFGNNs,

$$\mathbf{h}^{(0)}_{v, -|\mathcal{Y}|:} = \begin{cases} \mathbf{y}_v & (v \in V_{\text{train}}), \\ \mathbf{0}_{|\mathcal{Y}|} & (v \in V_{\text{test}}), \end{cases} \tag{32}$$

$$\mathbf{h}^{(l)}_{v, -|\mathcal{Y}|:} = \begin{cases} \mathbf{h}^{(l-1)}_{v, -|\mathcal{Y}|:} & (v \in V_{\text{train}}, l \in [L]), \\ \frac{1}{|\mathcal{N}(v)|} \sum_{u \in \mathcal{N}(v)} \mathbf{h}^{(l-1)}_{u, -|\mathcal{Y}|:} & (v \in V_{\text{test}}, l \in [L]). \end{cases} \tag{33}$$

This recursion is the same as Eqs. (9)-(13). Therefore,

$$\mathbf{h}^{(L)}_{v, -(|\mathcal{Y}|-i+1)} = p_{L,v,i} \tag{34}$$

holds. As $\mathbf{U}$ picks the last $|\mathcal{Y}|$ dimensions, and softmax is monotone,

$$\operatorname*{arg\,max}_{i} \hat{\mathbf{y}}_{v,i} = \operatorname*{arg\,max}_{i} p_{L,v,i} \tag{35}$$

holds. $p_{L,v,i} \to \hat{y}^{\mathrm{LP}}_{v,i}$ as $L \to \infty$ is shown in the proof of Theorem 4.1. $\square$
|
Therefore, the initialized TFGNNs can be used for transductive node classification as they are, without training. The approximation algorithm of label propagation is seamlessly embedded in the model parameters, and TFGNNs can also be trained just like usual GNNs.
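To make the initialization pattern of Eqs. (22)-(29) concrete, here is a sketch of how the relevant weight blocks could be overwritten in PyTorch. The function names and shapes are ours for illustration; the authors' released implementation is linked in Section 1.

```python
import torch

def init_tfgnn_layer(S, V, T, W, k):
    """Overwrite the last k = 1 + |Y| rows of S, V, T, W as in Eqs. (22)-(27).

    All matrices have shape (out_dim, in_dim); the label block occupies the
    last k coordinates of both the input and the output.
    """
    with torch.no_grad():
        S[-k:, :] = 0.0
        S[-k:, -k:] = torch.eye(k)   # training nodes: copy own label block
        V[-k:, :] = 0.0              # training nodes: ignore neighbors' label block
        T[-k:, :] = 0.0              # test nodes: ignore own label block
        W[-k:, :] = 0.0
        W[-k:, -k:] = torch.eye(k)   # test nodes: average neighbors' label block

def init_tfgnn_head(U, num_classes):
    """Set U so that it picks the last |Y| dimensions, as in Eqs. (28)-(29)."""
    with torch.no_grad():
        U[:, :] = 0.0
        U[:, -num_classes:] = torch.eye(num_classes)

# Example: hidden width 64, |Y| = 7 classes -> k = 8.
S, V, T, W = (torch.randn(64, 64) for _ in range(4))
U = torch.randn(7, 64)
init_tfgnn_layer(S, V, T, W, k=1 + 7)
init_tfgnn_head(U, num_classes=7)
```

Because the label-block entries stay in $[0, 1]$, ReLU acts as the identity on them, so the last $|\mathcal{Y}|$ coordinates of $\mathbf{h}^{(L)}_v$ reproduce the recursion of Eqs. (32)-(33).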
|
|
|
## 6 Experiments

## 6.1 Experimental Setup
|
|
|
We use the Planetoid datasets (Cora, CiteSeer, PubMed) [54], the Coauthor datasets, and the Amazon datasets [42] in the experiments. We use 20 nodes per class for training, 500 nodes for validation, and the rest for testing in the Planetoid datasets, following Kipf and Welling [20], and 20 nodes per class for training, 30 nodes per class for validation, and the rest for testing in the Coauthor and Amazon datasets, following Shchur et al. [42]. We use GCNs [20] and GATs [45] as the baselines. We use three-layer models with hidden dimension 32 unless otherwise specified. We train all the models with AdamW [25] with learning rate 0.0001 and weight decay 0.01.
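For concreteness, the training protocol can be set up along the following lines. This is a sketch assuming PyTorch Geometric, with PyG's built-in GCN model standing in for the compared architectures and PyG's public Planetoid split standing in for the splits described above; the number of iterations is illustrative.

```python
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn.models import GCN

# Baseline training setup sketched with PyTorch Geometric.
dataset = Planetoid(root="data", name="Cora")
data = dataset[0]

model = GCN(in_channels=dataset.num_features, hidden_channels=32,
            num_layers=3, out_channels=dataset.num_classes)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.01)

model.train()
for step in range(1000):
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()
```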
|
|
|
## 6.2 TFGNNs Outperform Existing GNNs In Training-Free Setting
|
|
|
We compare the performance of TFGNNs with GCNs and GATs in the training-free setting by assessing the accuracy of the models when the parameters are initialized. The results are shown in Table 1. TFGNNs outperform GCNs and GATs in all the datasets. Specifically, both GCNs and GATs are almost random in the training-free setting, while TFGNNs achieve non-trivial accuracy.
|
|
|
Table 1: Node classification accuracy in the training-free setting. The best results are shown in **bold**. CS: Coauthor CS, Physics: Coauthor Physics, Computers: Amazon Computers, Photo: Amazon Photo. TFGNNs outperform GCNs and GATs in all the datasets. These results indicate that TFGNNs are training-free. Note that we use three-layered TFGNNs to make the comparison fair, although deeper TFGNNs perform better in the training-free setting, as we confirm in Section 6.3.
|
|
|
| | Cora | CiteSeer | PubMed | CS | Physics | Computers | Photo |
|---|---|---|---|---|---|---|---|
| GCNs | 0.163 | 0.167 | 0.180 | 0.079 | 0.101 | 0.023 | 0.119 |
| GCNs + LaF | 0.119 | 0.159 | 0.407 | 0.080 | 0.146 | 0.061 | 0.142 |
| GATs | 0.177 | 0.229 | 0.180 | 0.040 | 0.163 | 0.058 | 0.122 |
| GATs + LaF | 0.319 | 0.077 | 0.180 | 0.076 | 0.079 | 0.025 | 0.044 |
| TFGNNs + random initialization | 0.149 | 0.177 | 0.180 | 0.023 | 0.166 | 0.158 | 0.090 |
| TFGNNs (proposed) | **0.600** | **0.362** | **0.413** | **0.601** | **0.717** | **0.730** | **0.637** |
|
|
|
|
|
|
Figure 2: Deep TFGNNs perform better in the training-free setting. The x-axis is the number of layers, and the y-axis is the accuracy of the models for the Cora dataset in the training-free setting. These results show that deeper TFGNNs perform better in the training-free setting. |
|
These results validate that TFGNNs meet Definition 5.1 of training-free models. We can also observe that GCNs, GATs, and TFGNNs do not benefit from LaF in the training-free setting if randomly initialized. These results indicate that both LaF and the initialization of TFGNNs are important for training-free performance.
|
|
|
## 6.3 Deep TFGNNs Perform Better In Training-Free Setting
|
|
|
We confirm that deeper TFGNNs perform better in the training-free setting. We have used three-layered TFGNNs so far to make the comparison with existing GNNs fair. Proposition 5.2 shows that the initialized TFGNNs converge to label propagation as the depth goes to infinity, and we therefore expect deeper TFGNNs to perform better in the training-free setting. Figure 2 shows the accuracy of TFGNNs with different depths for the Cora dataset. We can observe that deeper TFGNNs perform better in the training-free setting until the depth reaches around 10, where the performance saturates. It is noteworthy that GNNs have been known to suffer from the oversmoothing problem [22, 33], where the performance of GNNs degrades as the depth increases. It is interesting that TFGNNs do not suffer from the oversmoothing problem in the training-free setting. It should be noted that this does not necessarily mean that deeper models perform better in the optional training mode, because the optional training may break the structure introduced by the initialization of TFGNNs and may lead to oversmoothing and/or overfitting. We leave it as future work to overcome these problems by adopting countermeasures such as initial residual and identity mapping [5], MADReg [3], and DropEdge [35].
|
|
|
|
|
|
Figure 3: TFGNNs converge fast. The x-axis is the number of training iterations, and the y-axis is the validation accuracy of the models for the Cora dataset. These results show that TFGNNs in the optional training mode converge much faster than GCNs. |
|
|
|
|
|
|
Figure 4: TFGNNs are robust to feature noise. The x-axis is the standard deviation of the Gaussian noise added to the node features, and the y-axis is the accuracy of the models for the Cora dataset. Both models are trained. These results show that TFGNNs are more robust to feature noise than GCNs. |
|
|
|
## 6.4 TFGNNs Converge Fast
|
|
|
In the following, we investigate the optional training mode of TFGNNs. We train the models with three random seeds and report the average accuracy and standard deviation. We use GCNs without LaF (i.e., the original GCNs) as the baseline.
|
|
|
First, we confirm that TFGNNs in the optional training mode converge faster than GCNs. We show the training curves of TFGNNs and GCNs for the Cora dataset in Figure 3. TFGNNs converge much faster than GCNs. We hypothesize that TFGNNs converge faster because the initialized TFGNNs are already at a good starting point, while GCNs start from a completely random point and require many iterations to reach a good one. We can also observe that fully trained TFGNNs perform on par with GCNs. These results indicate that TFGNNs enjoy the best of both worlds: TFGNNs perform well without training and can be trained faster with optional training.
|
|
|
## 6.5 TFGNNs Are Robust To Feature Noise
|
|
|
As TFGNNs use both node features and label information while traditional GNNs rely only on node features, we expect TFGNNs to be more robust to feature noise than traditional GNNs. We confirm this in this section. We add i.i.d. Gaussian noise with standard deviation $\sigma$ to the node features and evaluate the accuracy of the models. We train TFGNNs and GCNs on the Cora dataset. The results are shown in Figure 4. TFGNNs are more robust to feature noise, especially in high-noise regimes where the performance of GCNs degrades significantly. These results indicate that TFGNNs are more robust than traditional GNNs to i.i.d. Gaussian noise added to the node features.
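The perturbation is plain additive Gaussian noise on the feature matrix; a minimal sketch ($\sigma$ is the quantity swept on the x-axis of Figure 4):

```python
import torch

def perturb_features(x: torch.Tensor, sigma: float) -> torch.Tensor:
    """Add i.i.d. Gaussian noise with standard deviation sigma to node features."""
    return x + sigma * torch.randn_like(x)
```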
|
|
|
## 7 Related Work

## 7.1 Labels As Features And Training-Free GNNs
|
|
|
The most relevant work is Wang et al. [50], who proposed to use node labels as inputs to GNNs. This technique was also used in Addanki et al. [1] and analyzed in Wang et al. [51]. The underlying idea is common with LaF, i.e., the use of label information as input to transductive GNNs. A result similar to Theorem 4.1 was also shown in [51]. However, the focus is different, and our work differs from theirs in several respects.
|
|
|
We propose the training-free + optional training framework for the first time. The notable characteristics of TFGNNs are that (i) TFGNNs receive both the original features and LaF, (ii) TFGNNs can be deployed without training, and (iii) TFGNNs can be improved with optional training. Besides, we provide detailed analyses and experiments, including the speed of convergence and noise robustness. Our results provide complementary insights to the existing works.
|
|
|
Another related topic is graph echo state networks [8, 9, 29], which lead to lightweight models for graph data. The key idea is to use randomly initialized fixed weights for aggregation. The main difference is that graph echo state networks still require training the output layer, while TFGNNs can be used without training. These methods are orthogonal, and it is an interesting direction to combine them to further improve the performance.
|
|
|
## 7.2 Speeding Up GNNs
|
|
|
Various methods have been proposed to speed up GNNs to handle large graph data. GraphSAGE [13] is one of the earliest methods to speed up GNNs. GraphSAGE employs neighbor sampling to reduce the computational cost of training and inference: it samples a fixed number of neighbors for each node and aggregates the features of the sampled neighbors. An alternative sampling method is layer-wise sampling, introduced in FastGCN [4]. Huang et al. [16] further improved FastGCN by using an adaptive node sampling technique to reduce the variance of estimators. LADIES [59] combined neighbor sampling and layer-wise sampling to take the best of both worlds. Another approach is to use smaller training graphs. ClusterGCN [6] uses a cluster of nodes as a mini-batch. GraphSAINT [56] samples subgraphs by random walks for each mini-batch.
|
|
|
It should also be noted that general techniques to speed up neural networks, such as mixed-precision training [30], quantization [17, 21, 43, 48, 52], and pruning [2, 14], can be applied to GNNs.
|
|
|
These methods mitigate the training cost of GNNs, but they still require many training iterations. In this paper, we propose training-free GNNs, which can be deployed instantly as soon as the model is initialized. |
|
|
|
Besides, our method can be improved with optional training. In the optional training mode, the speed-up techniques mentioned above can be combined with our method to further reduce the training time.
|
|
|
## 7.3 Expressive Power Of GNNs
|
|
|
Expressive power (or representation power) refers to the class of functions a model family can realize. The expressive power of GNNs is an important field of research in its own right. If GNNs cannot represent the true function, we cannot expect GNNs to work well however we train them. Therefore, it is important to elucidate the expressive power of GNNs. Originally, Morris et al. [31] and Xu et al. [53] showed that message-passing GNNs are at most as powerful as the 1-WL test, and they proposed k-GNNs and GINs, which are as powerful as the k-(set)WL and 1-WL tests, respectively. GINs are the most powerful message-passing GNNs. Sato [38, 39] and Loukas [26] showed that message-passing GNNs are as powerful as a computational model of distributed local algorithms, and they proposed GNNs that are as powerful as port-numbering and randomized local algorithms. Loukas [26] showed that GNNs are Turing-complete under certain conditions (i.e., with unique node ids and infinitely increasing depths). Some other works showed that GNNs can or cannot solve specific problems, e.g., GNNs can recover the underlying geometry [37] but cannot recognize bridges and articulation points [57]. There are various efforts to improve the expressive power of GNNs with non-message-passing architectures [27, 28, 32]. We refer the readers to survey papers [18, 36] for more details on the expressive power of GNNs.
|
|
|
We contributed to the field of the expressive power of GNNs by showing that GNNs with LaF are more powerful than GNNs without LaF. Specifically, we showed that GNNs with LaF can represent an important model, label propagation, while GNNs without LaF cannot. It should be emphasized that GINs, the most powerful message-passing GNNs, and even Turing-complete GNNs cannot represent label propagation without LaF because they do not have access to the label information that label propagation uses; note also that GINs traditionally do not use LaF. This result indicates that it is important to consider what to input to GNNs as well as the architecture of GNNs, and it provides a new insight into the field of the expressive power of GNNs.
|
|
|
## 8 Limitations |
|
|
|
Our work has several limitations. First, LaF and TFGNNs cannot be applied to inductive settings while most GNNs can. We do not regard this as a negative point. Popular GNNs such as GCNs and GATs are applicable to both transductive and inductive settings and are often used for transductive settings. |
|
|
|
However, this also means that they do not take advantage of transductive-specific structures (those that are not present in inductive settings). We believe that it is important to exploit inductive-specific techniques for inductive settings and transductive-specific techniques (such as LaF) for transductive settings in order to pursue maximum performance. |
|
|
|
Second, TFGNNs cannot be applied to heterophilous graphs, or their performance degrades on them, as TFGNNs are based on label propagation. The same argument as above applies: relying on homophilous graphs is not a negative point when pursuing maximum performance. It should be noted that LaF may be exploited in heterophilous settings as well. Developing training-free GNNs for heterophilous graphs based on LaF is an interesting direction for future work.
|
|
|
Third, we did not aim to achieve state-of-the-art performance. Exploring the combination of LaF with more elaborate techniques to achieve state-of-the-art performance is left as future work.
|
|
|
Finally, we did not explore applications of LaF other than TFGNNs. LaF can also help other GNNs in settings that are not training-free. Exploring the application of LaF to other GNNs is left as future work.
|
|
|
## 9 Conclusion |
|
|
|
In this paper, we made the following contributions. |
|
|
|
- We advocated the use of LaF in transductive learning (Section 3).
  - We confirmed that LaF is admissible in transductive learning, but it has not been explored in well-known GNNs such as GCNs and GATs.
- We formally showed that LaF strengthens the expressive power of GNNs (Section 4).
  - We showed that GNNs with LaF can represent label propagation (Theorem 4.1) while GNNs without LaF cannot (Proposition 4.2).
- We proposed training-free graph neural networks, TFGNNs (Section 5).
  - We showed that TFGNNs defined by Eqs. (19)-(29) meet the requirements of training-free models (Definition 5.1) by showing that the initialized TFGNNs approximate label propagation (Proposition 5.2).
- We confirmed that TFGNNs outperform existing GNNs in the training-free setting (Section 6).
  - We showed that TFGNNs outperform GCNs and GATs on all of the seven datasets in the training-free setting (Table 1).
  - TFGNNs achieve non-trivial accuracy without training and can be deployed instantly as soon as the model is initialized. These results corroborate that TFGNNs are training-free (Definition 5.1).
|
|
|
We also note that our idea can be applied to machine learning models other than graph neural networks. We hope that this paper opens the door to a new research direction of training-free neural networks.
|
|
|
## References |
|
|
|
[1] R. Addanki, P. W. Battaglia, D. Budden, A. Deac, J. Godwin, T. Keck, W. L. S. Li, A. Sanchez-Gonzalez, J. Stott, S. Thakoor, and P. Velickovic. Large-scale graph representation learning with very deep gnns and self-supervision. *arXiv*, 2021. URL https://arxiv.org/abs/2107.09422.

[2] D. W. Blalock, J. J. G. Ortiz, J. Frankle, and J. V. Guttag. What is the state of neural network pruning? In *Proceedings of Machine Learning and Systems 2020, MLSys*, 2020.

[3] D. Chen, Y. Lin, W. Li, P. Li, J. Zhou, and X. Sun. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. In *Proceedings of the 34th AAAI Conference on Artificial Intelligence, AAAI*, pages 3438-3445, 2020.

[4] J. Chen, T. Ma, and C. Xiao. FastGCN: Fast learning with graph convolutional networks via importance sampling. In *Proceedings of the 6th International Conference on Learning Representations, ICLR*, 2018.

[5] M. Chen, Z. Wei, Z. Huang, B. Ding, and Y. Li. Simple and deep graph convolutional networks. In *Proceedings of the 37th International Conference on Machine Learning, ICML*, pages 1725-1735, 2020.

[6] W. Chiang, X. Liu, S. Si, Y. Li, S. Bengio, and C. Hsieh. Cluster-GCN: An efficient algorithm for training deep and large graph convolutional networks. In *Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD*, pages 257-266, 2019.

[7] W. Fan, Y. Ma, Q. Li, Y. He, Y. E. Zhao, J. Tang, and D. Yin. Graph neural networks for social recommendation. In *The Web Conference 2019, WWW*, pages 417-426, 2019.

[8] C. Gallicchio and A. Micheli. Graph echo state networks. In *International Joint Conference on Neural Networks, IJCNN*, pages 1-8, 2010.

[9] C. Gallicchio and A. Micheli. Fast and deep graph neural networks. In *Proceedings of the 34th AAAI Conference on Artificial Intelligence, AAAI*, pages 3898-3905, 2020.

[10] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl. Neural message passing for quantum chemistry. In *Proceedings of the 34th International Conference on Machine Learning, ICML*, pages 1263-1272, 2017.

[11] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In *Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, AISTATS*, pages 249-256, 2010.

[12] M. Gori, G. Monfardini, and F. Scarselli. A new model for learning in graph domains. In *Proceedings of the International Joint Conference on Neural Networks, IJCNN*, volume 2, pages 729-734, 2005.

[13] W. L. Hamilton, Z. Ying, and J. Leskovec. Inductive representation learning on large graphs. In *Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, NeurIPS*, pages 1024-1034, 2017.

[14] S. Han, J. Pool, J. Tran, and W. J. Dally. Learning both weights and connections for efficient neural network. In *Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, NeurIPS*, pages 1135-1143, 2015.

[15] X. He, K. Deng, X. Wang, Y. Li, Y. Zhang, and M. Wang. LightGCN: Simplifying and powering graph convolution network for recommendation. In *Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR*, pages 639-648, 2020.

[16] W. Huang, T. Zhang, Y. Rong, and J. Huang. Adaptive sampling towards fast graph representation learning. In *Advances in Neural Information Processing Systems 31, NeurIPS*, 2018.

[17] B. Jacob, S. Kligys, B. Chen, M. Zhu, M. Tang, A. G. Howard, H. Adam, and D. Kalenichenko. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In *2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR*, pages 2704-2713, 2018.

[18] S. Jegelka. Theory of graph neural networks: Representation and learning. *arXiv*, abs/2204.07697, 2022.

[19] J. Jo, S. Lee, and S. J. Hwang. Score-based generative modeling of graphs via the system of stochastic differential equations. In *International Conference on Machine Learning, ICML*, pages 10362-10383, 2022.

[20] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In *Proceedings of the 5th International Conference on Learning Representations, ICLR*, 2017.

[21] R. Krishnamoorthi. Quantizing deep convolutional networks for efficient inference: A whitepaper. *arXiv*, abs/1806.08342, 2018. URL http://arxiv.org/abs/1806.08342.

[22] Q. Li, Z. Han, and X. Wu. Deeper insights into graph convolutional networks for semi-supervised learning. In *Proceedings of the 32nd AAAI Conference on Artificial Intelligence, AAAI*, pages 3538-3545, 2018.

[23] Z. Liu, C. Chen, X. Yang, J. Zhou, X. Li, and L. Song. Heterogeneous graph neural networks for malicious account detection. In *Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM*, pages 2077-2085, 2018.

[24] Z. Liu, C. Chen, L. Li, J. Zhou, X. Li, L. Song, and Y. Qi. GeniePath: Graph neural networks with adaptive receptive paths. In *The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI*, pages 4424-4431, 2019.

[25] I. Loshchilov and F. Hutter. Decoupled weight decay regularization. In *7th International Conference on Learning Representations, ICLR*, 2019.

[26] A. Loukas. What graph neural networks cannot learn: depth vs width. In *Proceedings of the 8th International Conference on Learning Representations, ICLR*, 2020.

[27] H. Maron, H. Ben-Hamu, H. Serviansky, and Y. Lipman. Provably powerful graph networks. In *Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS*, pages 2153-2164, 2019.

[28] H. Maron, H. Ben-Hamu, N. Shamir, and Y. Lipman. Invariant and equivariant graph networks. In *Proceedings of the 7th International Conference on Learning Representations, ICLR*, 2019.

[29] A. Micheli and D. Tortorella. Addressing heterophily in node classification with graph echo state networks. *Neurocomputing*, 550:126506, 2023.

[30] P. Micikevicius, S. Narang, J. Alben, G. F. Diamos, E. Elsen, D. García, B. Ginsburg, M. Houston, O. Kuchaiev, G. Venkatesh, and H. Wu. Mixed precision training. In *6th International Conference on Learning Representations, ICLR*, 2018.

[31] C. Morris, M. Ritzert, M. Fey, W. L. Hamilton, J. E. Lenssen, G. Rattan, and M. Grohe. Weisfeiler and Leman go neural: Higher-order graph neural networks. In *Proceedings of the 33rd AAAI Conference on Artificial Intelligence, AAAI*, pages 4602-4609, 2019.

[32] R. L. Murphy, B. Srinivasan, V. A. Rao, and B. Ribeiro. Relational pooling for graph representations. In *Proceedings of the 36th International Conference on Machine Learning, ICML*, pages 4663-4673, 2019.

[33] K. Oono and T. Suzuki. Graph neural networks exponentially lose expressive power for node classification. In *Proceedings of the 8th International Conference on Learning Representations, ICLR*, 2020.

[34] N. Park, A. Kan, X. L. Dong, T. Zhao, and C. Faloutsos. Estimating node importance in knowledge graphs using graph neural networks. In *Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD*, pages 596-606, 2019.

[35] Y. Rong, W. Huang, T. Xu, and J. Huang. DropEdge: Towards deep graph convolutional networks on node classification. In *8th International Conference on Learning Representations, ICLR*, 2020.

[36] R. Sato. A survey on the expressive power of graph neural networks. *arXiv*, abs/2003.04078, 2020. URL http://arxiv.org/abs/2003.04078.

[37] R. Sato. Graph neural networks can recover the hidden features solely from the graph structure. In *International Conference on Machine Learning, ICML*, pages 30062-30079, 2023.

[38] R. Sato, M. Yamada, and H. Kashima. Approximation ratios of graph neural networks for combinatorial problems. In *Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS*, pages 4083-4092, 2019.

[39] R. Sato, M. Yamada, and H. Kashima. Random features strengthen graph neural networks. In *Proceedings of the 2021 SIAM International Conference on Data Mining, SDM*, pages 333-341, 2021.

[40] R. Sato, M. Yamada, and H. Kashima. Constant time graph neural networks. *ACM Trans. Knowl. Discov. Data*, 16(5):92:1-92:31, 2022. doi: 10.1145/3502733. URL https://doi.org/10.1145/3502733.

[41] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neural network model. *IEEE Trans. Neural Networks*, 20(1):61-80, 2009. doi: 10.1109/TNN.2008.2005605. URL https://doi.org/10.1109/TNN.2008.2005605.

[42] O. Shchur, M. Mumme, A. Bojchevski, and S. Günnemann. Pitfalls of graph neural network evaluation. *arXiv*, 2018.

[43] X. Sun, N. Wang, C. Chen, J. Ni, A. Agrawal, X. Cui, S. Venkataramani, K. E. Maghraoui, V. Srinivasan, and K. Gopalakrishnan. Ultra-low precision 4-bit training of deep neural networks. In *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS*, 2020.

[44] Y. Tian, H. Song, Z. Wang, H. Wang, Z. Hu, F. Wang, N. V. Chawla, and P. Xu. Graph neural prompting with large language models. In *Thirty-Eighth AAAI Conference on Artificial Intelligence, AAAI*, pages 19080-19088, 2024.

[45] P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio. Graph attention networks. In *Proceedings of the 6th International Conference on Learning Representations, ICLR*, 2018.

[46] D. Wang, Y. Qi, J. Lin, P. Cui, Q. Jia, Z. Wang, Y. Fang, Q. Yu, J. Zhou, and S. Yang. A semi-supervised graph attentive network for financial fraud detection. In *2019 IEEE International Conference on Data Mining, ICDM*, pages 598-607, 2019.

[47] H. Wang, M. Zhao, X. Xie, W. Li, and M. Guo. Knowledge graph convolutional networks for recommender systems. In *The World Wide Web Conference, WWW*, pages 3307-3313, 2019.

[48] N. Wang, J. Choi, D. Brand, C. Chen, and K. Gopalakrishnan. Training deep neural networks with 8-bit floating point numbers. In *Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS*, pages 7686-7695, 2018.

[49] X. Wang, X. He, Y. Cao, M. Liu, and T. Chua. KGAT: Knowledge graph attention network for recommendation. In *Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD*, pages 950-958, 2019.

[50] Y. Wang, J. Jin, W. Zhang, Y. Yu, Z. Zhang, and D. Wipf. Bag of tricks for node classification with graph neural networks. *arXiv*, abs/2103.13355, 2021.

[51] Y. Wang, J. Jin, W. Zhang, Y. Yang, J. Chen, Q. Gan, Y. Yu, Z. Zhang, Z. Huang, and D. Wipf. Why propagate alone? Parallel use of labels and features on graphs. In *The Tenth International Conference on Learning Representations, ICLR*, 2022.

[52] H. Wu, P. Judd, X. Zhang, M. Isaev, and P. Micikevicius. Integer quantization for deep learning inference: Principles and empirical evaluation. *arXiv*, abs/2004.09602, 2020. URL https://arxiv.org/abs/2004.09602.

[53] K. Xu, W. Hu, J. Leskovec, and S. Jegelka. How powerful are graph neural networks? In *Proceedings of the 7th International Conference on Learning Representations, ICLR*, 2019.

[54] Z. Yang, W. W. Cohen, and R. Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. In *Proceedings of the 33rd International Conference on Machine Learning, ICML*, pages 40-48, 2016.

[55] R. Ying, R. He, K. Chen, P. Eksombatchai, W. L. Hamilton, and J. Leskovec. Graph convolutional neural networks for web-scale recommender systems. In *Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD*, pages 974-983, 2018.

[56] H. Zeng, H. Zhou, A. Srivastava, R. Kannan, and V. K. Prasanna. GraphSAINT: Graph sampling based inductive learning method. In *8th International Conference on Learning Representations, ICLR*, 2020.

[57] B. Zhang, S. Luo, L. Wang, and D. He. Rethinking the expressive power of GNNs via graph biconnectivity. In *The Eleventh International Conference on Learning Representations, ICLR*, 2023.

[58] X. Zhu and Z. Ghahramani. Learning from labeled and unlabeled data with label propagation. 2002.

[59] D. Zou, Z. Hu, Y. Wang, S. Jiang, Y. Sun, and Q. Gu. Layer-dependent importance sampling for training deep and large graph convolutional networks. In *Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS*, pages 11247-11256, 2019.