# Fair Evaluation Of Graph Markov Neural Networks

Anonymous authors
Paper under double-blind review

## Abstract

Graph Markov Neural Networks (GMNN) have recently been proposed to improve regular graph neural networks (GNN) by including label dependencies in the semi-supervised node classification task. GMNNs do this in a theoretically principled way and use three kinds of information to predict labels. Just like ordinary GNNs, they use the node features and the graph structure, but they moreover leverage information from the labels of neighboring nodes to improve the accuracy of their predictions. In this paper, we introduce a new dataset named *WikiVitals*, which contains a graph of 48k mutually linked Wikipedia articles classified into 32 categories and connected by 2.3M edges. Our aim is to rigorously evaluate the contributions of three distinct sources of information to the prediction accuracy of GMNN on this dataset: the content of the articles, their connections with each other, and the correlations among their labels. For this purpose we adapt a method recently proposed for performing fair comparisons of GNN performance, using an appropriate randomization over partitions and a clear separation of model selection and model assessment.
## 1 Introduction

Graph neural networks (GNN) (Yang et al., 2016; Kipf & Welling, 2017; Defferrard et al., 2016) have become a tool of choice for modeling datasets whose observations are not i.i.d. but consist of entities interconnected according to a graph of relations. They can be used either for graph classification, such as molecule classification (Dobson & Doig, 2003; Borgwardt et al., 2005), or for node classification, such as document classification in a citation network (Sen et al., 2008).

The most common task is certainly semi-supervised node classification, in which unlabeled nodes of a given subset are to be classified using a distinct subset of labeled nodes, the train set, from the same graph (Kipf & Welling, 2017; Defferrard et al., 2016). Inductive classification, on the other hand, refers to the most common setting in machine learning, in which the nodes to be labeled are not known ahead of time (Hamilton et al., 2017).

A number of architectures have been proposed over the years to deal with specific issues arising in GNNs. Some combat over-smoothing (the tendency of deep GNNs to predict the same labels for all nodes) (Klicpera et al., 2018), some deal with assortativity or heterophily (situations in which neighboring nodes are likely to have different labels) (Zhu et al., 2020; 2021; Bo et al., 2021), and others try to learn the connection weights from data using an appropriate attention mechanism (Veličković et al., 2018). Despite their diversity, these models all share one important shortcoming: they assume that labels can be predicted independently for each node in the graph. In other words, they neglect label dependencies altogether. More recently, Graph Markov Neural Networks (GMNN) (Qu et al., 2019) were introduced as genuine probabilistic models which include label correlations in graphs by combining the strengths of GNNs with those of conditional random fields (CRF) while avoiding their limitations. GMNNs are the models we focus on in this work. The accuracy of the GMNN model was evaluated for node classification and link prediction tasks in Qu et al. (2019) on the classical benchmark datasets Cora, Pubmed and Citeseer (Sen et al., 2008) using the public splits defined in Yang et al. (2016). Under these settings a clear improvement was demonstrated when comparing the GMNN model to existing baselines that do not account for label dependencies. However, as a number of recent works (Shchur et al., 2018; Errica et al., 2020) have pointed out, a fair evaluation of the performance of GNNs requires a procedure which performs a systematic randomization over train-validation-test partitions and makes a clear separation between model selection and model assessment.

Our aim in this paper is to subject GMNN to such a rigorous performance analysis on a new, relatively large graph of documents with topical labels named *WikiVitals*, which we created for this purpose. In a first step we rigorously evaluate the effect on the accuracy of a classifier of leveraging the graph structure. This is done by comparing a basic GNN with a graph-agnostic baseline such as an MLP. In a second step we estimate the increase in accuracy that results from taking into account label correlations, using a GMNN on top of a basic GNN. For completeness we also perform the same thorough analysis on the classical benchmark datasets Cora, Citeseer and Pubmed.

In summary, our contributions[^1] are:

- We introduce a new dataset of interconnected documents named *WikiVitals*, extracted from the English Wikipedia. Compared to the classical benchmark datasets this is a relatively large graph comprising 48k nodes classified into 32 categories and connected by 2.3M edges.
- We apply the fair comparison procedure proposed in Errica et al. (2020) to GMNN, a sophisticated node classification model. So far only graph classification models had been evaluated in this manner.
- We evaluate the respective contributions to the accuracy of classifying *WikiVitals* articles of first including the graph structure information using a common GNN, and next leveraging the label correlation information using a GMNN model on top of that GNN.

[^1]: Code and data are available at https://drive.google.com/file/d/12G_Jx-Wpuko-2KgVfVfS6ZiLK8idW31V/view?usp=sharing

## 2 Related Work

## 2.1 Modelling Label Dependency in GNNs

Prior to the recent advent of GNNs, a number of works attempted to include label dependencies using various heuristics. Label propagation is an early such attempt, in which a cost function balances the penalty for predicting the wrong labels with the requirement that node labels should vary smoothly (Zhou et al., 2003; Zhu, 2005).

Dataset-specific methods have also been proposed. As far as classifying Wikipedia articles is concerned, the authors of Viard et al. (2020) use a simple GNN whose weights are empirically adjusted depending on the similarity of the labels of neighboring nodes. Although these approaches have had some empirical success (Huang et al., 2021), the lack of a sound probabilistic foundation makes it difficult to analyze why they fail or succeed. In particular, they do not clearly distinguish the contributions of the node features, the graph structure and the label correlations to the prediction accuracy. In our work we decided to avoid using topological node features like node degree, betweenness or assortativity (Newman, 2003; Blondel et al., 2008; Newman, 2005) in order to make this distinction clearer.

GNNs are a good fit for finding distributed node representations that merge the information supplied by the node features, such as the content of a document, with the local structure of the graph in the vicinity of that node. Each such representation is then used to predict the label of that node independently of those of the other nodes. CRFs, on the other hand, come in handy for prescribing scores for arbitrary combinations of labels; however, performing exact inference is hard because of the difficulty of computing the partition function. GMNN proposes an elegant solution to this conundrum by using two ordinary GNNs which are coupled when trained with the Expectation-Maximization (EM) algorithm (see section 3.1). Its performance was evaluated in Qu et al. (2019) in the usual way, using public partitions of classical benchmarks (Sen et al., 2008), but without accounting for the robustness of this evaluation under different splits, which is essential for a fair evaluation (Errica et al., 2020; Shchur et al., 2018). This is one of our goals. We also evaluate the contributions to the classification accuracy of the three sources of information mentioned above, namely the content of the articles, their interconnections, and the correlations between the labels of connected articles.

## 2.2 Evaluating Performance of GNNs

As the authors of Shchur et al. (2018) point out, evaluations of GNN models are almost never conducted in a rigorous manner. On the one hand, many experiments are not replicable due to the lack of a precise definition of the evaluation process. On the other hand, they argue that using a single split, usually the one defined in the paper that introduces a new benchmark dataset, is insufficient to establish a significant difference between the accuracies of two competing GNN architectures. The authors thus suggest standardizing the choice of hyperparameters and randomizing over many train-validation-test splits. They then search for a set of hyperparameters that optimizes the average performance over those splits. Surprisingly, they find that the simplest architectures, like GCNs (Kipf & Welling, 2017; Defferrard et al., 2016), often perform better on the semi-supervised node classification task than more sophisticated models (Veličković et al., 2018; Monti et al., 2017).

In our work we follow a still more rigorous accuracy assessment, originally proposed in Errica et al. (2020) as a state-of-the-art evaluation procedure for the graph classification task. For a given model we search for the best hyperparameters on a per-split basis and then average the accuracy estimates of those optimized models over the splits. This allows for a fair assessment when comparing two models, in the sense that it guarantees that a practitioner who randomly chooses a split, trains her model on the train set, optimizes its hyperparameters on the validation set and estimates the accuracy on the test set will obtain an estimate that is truly reliable for comparing models such as an MLP, a GCN or a GMNN.

## 2.3 Classifying Wikipedia Articles

Wikipedia articles provide rich textual content from which many informative n-grams can be extracted in order to build vector representations of the articles. The mutual hyperlinks define a natural graph structure in which articles are the nodes. Several datasets have been created from Wikipedia in this way and are used to evaluate various GNN architectures, including Squirrel and Chameleon (Rozemberczki et al., 2021)[^2]. This is also the case for the *WikiVitals* dataset we introduce in this article.

The labeling of Wikipedia articles can leverage various sources of information. The labels of Squirrel and Chameleon, for instance, are based on monthly traffic data (acquired through the metadata of the articles) and correspond to an artificial segmentation into 3 or 5 categories (Bo et al., 2021). In Viard et al. (2020), the labeling is based on a collection of labels external to Wikipedia. None of these datasets, however, exploits a thematic classification resulting from a consensus among Wikipedia editors, as does the list of vital articles of Wikipedia[^3]. This is the data we used to label the nodes of our *WikiVitals* dataset (see section 4.2). This classification of vital articles, where each document is associated with a unique label, is not exempt from arbitrariness, however: the assignment of an article to one category or another can sometimes be ambiguous. Furthermore, the classification is imbalanced and contains categories with very few representatives.

A common feature of Wikipedia datasets (Squirrel, Chameleon, as well as *WikiVitals*) is that they are more disassortative (Newman, 2003) than classical graph datasets[^4]. This makes them particularly interesting as benchmarks for a node classification task, as basic models like GCNs show their limits in such disassortative contexts (Bo et al., 2021). Some recent models like H2GCN or FAGCN have been proposed to overcome this problem and show better performance in those contexts (Bo et al., 2021; Zhu et al., 2020).

[^2]: http://snap.stanford.edu/data/wikipedia-article-networks.html
[^3]: https://en.wikipedia.org/wiki/Wikipedia:Vital_articles/Level/5
[^4]: The notion of heterophily is also commonly used and generally means that a small proportion of edges connects nodes sharing the same label.

## 3 Adapting the Fair Comparison Method to GMNN

## 3.1 Training a GMNN

Before delving into the specifics of our evaluation process, let us recall the definition of the GMNN model. We use the same notations as Qu et al. (2019) and refer to that work for a thorough justification of the training procedure we sketch below. We consider a graph G = (V, E, xV) where V denotes the set of nodes, E the set of edges and xV := {xn}n∈V the set of features associated with each node n. We assume that we are given the one-hot encoded labels (for K categories) yL := {yn}n∈L for the nodes in a subset L ⊂ V, as well as the features xV of all nodes. The task we consider is the prediction of the labels yU of the remaining unlabeled nodes in U = V \ L. The GMNN model does two things. First, it specifies a model for the joint probability pϕ(yL, yU | xV) compatible with a CRF describing correlations between neighboring nodes. Second, it describes a practical training procedure, based on the EM algorithm, for finding the parameters ϕ which maximize a variational lower bound on the marginal likelihood pϕ(yL | xV) of the observed labels; we quickly summarize this procedure below.

Training a GMNN requires defining two ordinary GNNs. The first one, denoted GNNϕ, where ϕ is the set of its parameters, describes the conditional distribution pϕ(yn | yNB(n), xV) over individual node labels yn, given the labels of the neighboring nodes, denoted NB(n), and the node features xV. It is specified in the usual manner by a softmax applied to a d-dimensional node embedding hϕ,n, read off from the last layer of GNNϕ, multiplied by a K × d learnable matrix Wϕ:

$$p_{\phi}(\mathbf{y}_{n}\,|\,\mathbf{y}_{\mathrm{NB}(n)},\mathbf{x}_{V})=\mathrm{Cat}\big(\mathbf{y}_{n}\,|\,\mathrm{softmax}(W_{\phi}\mathbf{h}_{\phi,n})\big).\tag{1}$$
A second GNN, which we denote GNNθ, defines a mean-field variational distribution meant to approximate the posterior pϕ(yU | yL, xV) in the EM algorithm. It is defined nodewise in a similar way:

$$q_{\theta}(\mathbf{y}_{n}\,|\,\mathbf{x}_{V})=\mathrm{Cat}\big(\mathbf{y}_{n}\,|\,\mathrm{softmax}(W_{\theta}\mathbf{h}_{\theta,n})\big).\tag{2}$$

Intuitively, GNNθ is a model that completely neglects correlations among labels. Its predictions are then adjusted by GNNϕ, which accounts for the correlations between the labels of neighboring nodes; these in turn correct GNNθ within a cyclic EM training procedure. The training process uses the following two objective functions. One is for updating θ while holding ϕ fixed:

$$O_{\theta}=\sum_{n\in U}\mathbb{E}_{p_{\phi}(\mathbf{y}_{n}|\hat{\mathbf{y}}_{\mathrm{NB}(n)},\mathbf{x}_{V})}\big[\log q_{\theta}(\mathbf{y}_{n}\,|\,\mathbf{x}_{V})\big]+\sum_{n\in L}\log q_{\theta}(\mathbf{y}_{n}\,|\,\mathbf{x}_{V}),\tag{3}$$

where ŷn denotes the ground-truth label yn if n ∈ L, and is sampled from qθ(yn | xV) if n ∈ U. Using the same notations, the other objective function, used for optimizing ϕ while holding θ fixed, is:

$$O_{\phi}=\sum_{n\in V}\log p_{\phi}(\hat{\mathbf{y}}_{n}\,|\,\hat{\mathbf{y}}_{\mathrm{NB}(n)},\mathbf{x}_{V}).\tag{4}$$

The first step of training a GMNN is to initialize qθ by maximizing the last term of (3) over θ. This corresponds to an ordinary GNN trained without accounting for label correlations, so the accuracy of this initial qθ model provides a baseline against which to compare the full GMNN model. Second, fix θ and maximize (4) over ϕ: this is the M-step. Finally, optimize (3) over θ while holding ϕ fixed: this is the E-step. The M- and E-steps are repeated until convergence. Experience shows that qθ is consistently a better predictor than pϕ (Qu et al., 2019).
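
To make this procedure concrete, here is a minimal sketch of the EM loop, assuming two pre-built PyTorch modules: `gnn_theta` maps node features to logits, and `gnn_phi` maps features concatenated with a one-hot label channel to logits, where `gnn_phi` is assumed to aggregate the label channel from neighbors only (no self-loop on label inputs), matching pϕ(yn | yNB(n), xV). All names are illustrative, and refinements used in the paper (e.g. annealed sampling) are omitted.

```python
import torch
import torch.nn.functional as F

def train_gmnn(gnn_theta, gnn_phi, x, y, labeled, unlabeled,
               em_loops=10, epochs=100, lr=1e-2):
    opt_theta = torch.optim.Adam(gnn_theta.parameters(), lr=lr)
    opt_phi = torch.optim.Adam(gnn_phi.parameters(), lr=lr)

    # Initialization: maximize the last term of (3), i.e. train q_theta
    # on the labeled nodes only (an ordinary GNN, our baseline).
    for _ in range(epochs):
        opt_theta.zero_grad()
        F.cross_entropy(gnn_theta(x)[labeled], y[labeled]).backward()
        opt_theta.step()

    for _ in range(em_loops):
        # Pseudo-labels y_hat: ground truth on L, samples from q_theta on U.
        with torch.no_grad():
            probs = F.softmax(gnn_theta(x), dim=-1)
        y_hat = y.clone()
        y_hat[unlabeled] = torch.multinomial(probs[unlabeled], 1).squeeze(-1)
        onehot = F.one_hot(y_hat, probs.size(-1)).float()

        # M-step: maximize (4), training p_phi to predict y_hat from the
        # pseudo-labels of neighboring nodes plus the node features.
        for _ in range(epochs):
            opt_phi.zero_grad()
            logits = gnn_phi(torch.cat([x, onehot], dim=-1))
            F.cross_entropy(logits, y_hat).backward()
            opt_phi.step()

        # E-step: maximize (3), pulling q_theta toward p_phi on U while
        # keeping it close to the ground truth on L.
        with torch.no_grad():
            target = F.softmax(gnn_phi(torch.cat([x, onehot], dim=-1)), dim=-1)
        for _ in range(epochs):
            opt_theta.zero_grad()
            logits = gnn_theta(x)
            loss = F.kl_div(F.log_softmax(logits[unlabeled], dim=-1),
                            target[unlabeled], reduction="batchmean")
            loss = loss + F.cross_entropy(logits[labeled], y[labeled])
            loss.backward()
            opt_theta.step()
    return gnn_theta, gnn_phi
```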
## 3.2 Fair Comparison of GNNs

Recall that our main goal is to rigorously ascertain under which circumstances a GMNN model architecture, which was designed to leverage label correlations, achieves higher accuracy when classifying articles from the *WikiVitals* dataset than a correlation-agnostic model like a GCN or a FAGCN (Bo et al., 2021). We also wish to compare the GCN and FAGCN models with a structure-agnostic baseline like an MLP on this same dataset.

![4_image_0.png](4_image_0.png)

Figure 1: The fair evaluation procedure for GNNs and its adaptation to GMNN uses k train/validation/test splits (D^(i)_in-train, D^(i)_valid, D^(i)_test) created from k stratified folds F_i, as explained in section 3.2.

A crude evaluation would proceed by partitioning the available dataset of labeled *WikiVitals* articles D = ((x1, y1), ..., (xN, yN)) into three disjoint sets: a train set D_train, a validation set D_valid for selecting the optimal hyperparameters γ* among a set Γ, and a test set D_test for evaluating the accuracy of that optimal model. Unfortunately, such a simple procedure was shown to be so unstable that changing the partition could totally scramble the relative ranking of various GNN architectures (Errica et al., 2020; Shchur et al., 2018).

To perform reliable comparisons we follow the best practices described in Errica et al. (2020). The main requirement is to clearly separate model assessment from model selection.

Model assessment uses a k-fold cross-validation procedure. The dataset D is first split into k disjoint stratified[^5] folds F_1, ..., F_k. Then k different train and test sets are defined as:

$${\mathcal{D}}_{\mathrm{train}}^{(i)}:=\bigcup_{j\neq i}{\mathcal{F}}_{j},\quad{\mathcal{D}}_{\mathrm{test}}^{(i)}:={\mathcal{F}}_{i},\quad i=1,\ldots,k.$$

Each train set is itself split into an inner train set and a validation set:

$${\mathcal{D}}_{\mathrm{train}}^{(i)}:={\mathcal{D}}_{\mathrm{in-train}}^{(i)}\cup{\mathcal{D}}_{\mathrm{valid}}^{(i)},\quad i=1,\ldots,k.$$

[^5]: In the context of this paper, stratified sampling of nodes means that the distribution of labels in the sampled set is approximately the same as in the set of nodes from which it was sampled.

Model selection (see below) is performed separately for each D^(i)_train. This results in a set of hyperparameters γ^(i) which is optimal for D^(i)_train. The model is then trained with these optimal hyperparameters γ^(i) on D^(i)_in-train, using D^(i)_valid to implement early stopping. In fact, for each fold i, the test accuracy is averaged over r training runs with different random initializations of the weights, to smooth out any possible bad configuration. The average of these test accuracies over the k folds constitutes our final assessment of a model architecture.

Model selection corresponds to choosing an optimal set of hyperparameters. It is performed separately for each D^(i)_train. More precisely, the model is trained on D^(i)_in-train using D^(i)_valid as a holdout set for selecting the hyperparameters γ^(i) which maximize the accuracy among a set Γ of configurations.
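
Concretely, the assessment loop can be sketched as follows, assuming two hypothetical helpers: `select_hyperparameters` performs the per-split model selection on the validation set, and `train_and_test` trains a model with early stopping on the validation set and returns its test accuracy.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split

def fair_assessment(nodes, labels, select_hyperparameters, train_and_test,
                    k=10, r=20, seed=0):
    outer = StratifiedKFold(n_splits=k, shuffle=True, random_state=seed)
    fold_scores = []
    for train_idx, test_idx in outer.split(nodes, labels):
        # Carve a stratified validation set out of each outer train set
        # (81% / 9% / 10% overall, as in section 4.1).
        in_train_idx, valid_idx = train_test_split(
            train_idx, test_size=0.1, stratify=labels[train_idx],
            random_state=seed)
        # Per-split model selection on the validation set.
        gamma = select_hyperparameters(in_train_idx, valid_idx)
        # Average the test accuracy over r random weight initializations.
        runs = [train_and_test(gamma, in_train_idx, valid_idx, test_idx,
                               init_seed=t) for t in range(r)]
        fold_scores.append(np.mean(runs))
    # Final assessment: mean and spread over the k folds.
    return float(np.mean(fold_scores)), float(np.std(fold_scores))
```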
## 3.3 Adaptation to GMNN

In order to evaluate GMNN using the fair evaluation principle described above, we must select, for each split j, a pair γ^(j) := (α^(j), β^(j)) of optimal hyperparameters, α^(j) for GNNθ and β^(j) for GNNϕ respectively. We follow the same strategy as Qu et al. (2019), namely we compute the hyperparameters α^(j) for GNNθ first and then set β^(j) = α^(j) for the hyperparameters of GNNϕ. The model assessment phase remains unchanged. For each split j, the set of hyperparameters γ^(j) = (α^(j), β^(j)) is used to compute the test accuracy of GMNN using r random weight initializations. The fair evaluation method adapted to GMNN is presented in figure 1.

Note that performing a fair evaluation of the model after completion of the initial training of GNNθ, and before entering the EM optimization, amounts to a fair evaluation of a plain GNN and thus requires no additional computation.

## 4 Experiment

## 4.1 Datasets and Settings

**Datasets:** For our main experiment we created *WikiVitals*, a novel sparse and disassortative document-document graph built from the English Wikipedia level 5 vital articles as of April 2022 (Wikipedia is under a CC BY-SA license). Node features xn encode the presence or absence of 4000 n-grams in the summary section, title and headings of the article. The edges are the hyperlinks between articles. Each node of the graph is associated with a single label (among 32) corresponding to an intermediate level in a hierarchy of topics co-constructed by Wikipedia editors. More information on this new dataset, as well as statistics on all datasets, can be found in table 1 and section 4.2. For completeness, we also performed a fair evaluation on the three well-known assortative citation network datasets: Cora, Citeseer and Pubmed. Edges in these networks represent citations between two scientific articles, node features xn are bag-of-words vectors of the articles, and labels yn correspond to the fields of the articles. For all datasets, we treat the graphs as undirected.

**General setup:** All baseline models (MLP, GCN and FAGCN) were reimplemented using PyTorch with two layers (input representations → hidden layer → output layer). For all models, L2-regularization is applied on all layers, and dropout is applied on the input data and on all layers. For GCN and FAGCN, we used the so-called *renormalization trick* on the adjacency matrix (Kipf & Welling, 2017). For FAGCN, the number of propagations (Bo et al., 2021) is set to 2 in order to limit the aggregation of information to nodes located at a maximum distance of 2. For GMNN we use the annealed sampling method with the factor set to 0.1 (Qu et al., 2019), and the number of EM-loops is set to 10. Both the label predictions ŷn made with GNNθ and the node features xV are used to train GNNϕ, as defined in (4).

We use the same training procedure for all models. For all datasets, node features are binarized and then normalized (L1-norm) before training. We used the Adam optimizer (Kingma & Ba, 2015) with default parameters and no learning rate decay, the same maximum number of training epochs, an early stopping criterion and a patience hyperparameter (see section 4.3 for more details). Validation accuracy is evaluated at the end of each epoch. All model parameters (convolutional kernel coefficients for FAGCN, weight matrices for all models) are initialized and optimized simultaneously (weights are initialized according to Glorot, and biases are initialized to zero). In all cases we use full-batch training (using all nodes of the training set at every epoch).
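
A minimal sketch of this preprocessing step, assuming the raw features come as a scipy sparse count matrix:

```python
import numpy as np
from scipy import sparse
from sklearn.preprocessing import normalize

def preprocess_features(features: sparse.csr_matrix) -> sparse.csr_matrix:
    binary = (features > 0).astype(np.float32)   # presence/absence of each n-gram
    return normalize(binary, norm="l1", axis=1)  # each row sums to 1
```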
**Fair evaluation setup:** During the assessment phase, we perform r = 20 trainings for each of the k = 10 splits. The best configuration of hyperparameters α^(j) for GNNθ is calculated for each split j, and we then set the hyperparameters β^(j) = α^(j) for GNNϕ.

For each dataset, we followed the best practices advocated in Errica et al. (2020) and summarized in section 3.2 to pre-calculate stratified splits (D^(j)_in-train, D^(j)_valid, D^(j)_test), j = 1, ..., k, of the entire set of nodes, with respective ratios of 81%, 9% and 10%. In the sequel the sets D^(j)_in-train will be referred to as dense training sets.

In addition, we created two other sets of splits whose train sets are sparse, first to allow a convenient comparison with previous works which actually use such train sets (Yang et al., 2016), and second to enlarge the scope of the methods tested in this article. As a reminder, the evaluation of GNNs as well as GMNN on Cora, Citeseer and Pubmed was classically performed using the Planetoid splits (Yang et al., 2016) of these datasets, or similarly constructed splits composed of 20 nodes per category randomly selected from the whole dataset (Shchur et al., 2018; Bo et al., 2021; Qu et al., 2019).

| Dataset    | Assortativity | #Nodes | #Edges    | #Categories | #Features |
|------------|---------------|--------|-----------|-------------|-----------|
| Cora       | 0.771         | 2,708  | 5,429     | 7           | 1,433     |
| Citeseer   | 0.675         | 3,327  | 4,732     | 6           | 3,703     |
| Pubmed     | 0.686         | 19,717 | 44,338    | 3           | 500       |
| WikiVitals | 0.204         | 48,512 | 2,297,782 | 32          | 4,000     |

Table 1: Statistics of the document graphs

To construct splits with sparse train sets, we independently extracted two subsets D^(j)_sparse-balanced and D^(j)_sparse-stratified from each D^(j)_in-train, j = 1, ..., k. Each contains 20 × K nodes (where K is the number of categories). Each D^(j)_sparse-balanced is constructed by selecting 20 nodes of each category from D^(j)_in-train; in the sequel these sets will be referred to as sparse balanced train sets, in the sense that each category is represented equally in each of them. Each D^(j)_sparse-stratified is constructed by selecting nodes from D^(j)_in-train in a stratified way; we shall refer to these sets as sparse stratified train sets. Thus we have k splits of each dataset with sparse balanced train sets (D^(j)_sparse-balanced, D^(j)_valid, D^(j)_test), j = 1, ..., k, and k splits of each dataset with sparse stratified train sets (D^(j)_sparse-stratified, D^(j)_valid, D^(j)_test), j = 1, ..., k. The fair evaluation method presented in section 3.3 is easily adapted to splits with sparse train sets by substituting them for the inner train sets in every training phase.
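
For illustration, the two kinds of sparse train sets could be drawn as follows; this is a sketch under the assumption that `labels` is an integer array indexed by node and `in_train` an index array (the names are ours, not those of the released code):

```python
import numpy as np

def sparse_balanced(in_train, labels, per_class=20, seed=0):
    """20 nodes per category (assumes every category has at least 20 nodes)."""
    rng = np.random.default_rng(seed)
    picks = []
    for c in np.unique(labels[in_train]):
        nodes_c = in_train[labels[in_train] == c]
        picks.append(rng.choice(nodes_c, size=per_class, replace=False))
    return np.concatenate(picks)

def sparse_stratified(in_train, labels, per_class=20, seed=0):
    """Same total budget (20*K nodes), allocated proportionally to the
    label distribution of the inner train set."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(labels[in_train], return_counts=True)
    total = per_class * len(classes)
    quotas = np.maximum(1, np.round(total * counts / counts.sum()).astype(int))
    picks = []
    for c, q in zip(classes, quotas):
        nodes_c = in_train[labels[in_train] == c]
        picks.append(rng.choice(nodes_c, size=min(q, len(nodes_c)), replace=False))
    return np.concatenate(picks)  # rounding may shift the total slightly
```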

Rigorous model selection phases imply performing extensive grid searches over the hyperparameter set Γ, which is computationally very expensive. In practice we implemented our own evolutionary grid search algorithm, which discovers suitable configurations of hyperparameters by using the validation accuracy to guide the evolution. Such an algorithm finds a suitable configuration while exploring only a small portion of Γ (Young et al., 2015); see section 4.3 for more details.

## 4.2 Creation of the WikiVitals Dataset

WikiVitals is a disassortative document-document network created from the 48,512 vital Wikipedia articles extracted from a complete English Wikipedia dump dated April 2022. Nodes correspond to vital Wikipedia articles. Node features are binary bag-of-words sparse representations of the articles: each of the 4000 features corresponds to the presence or absence of an informative unigram or bigram in the introduction, title or section titles of the article. Edges correspond to the mutual hyperlinks between articles within the corpus of vital articles. The vital articles have been selected by Wikipedia editors and categorized by topic. We extracted a 3-level hierarchy of topics and used the 32 intermediate topics within this hierarchy as the labels assigned to the nodes of the graph; each node was assigned a single label. Table 2 shows a partial view of the topic hierarchy, focusing on the 32 intermediate categories used in this paper[^6].

We relied on a dump of the English Wikipedia because it provides a frozen view of the English Wikipedia and because it contains the text of the articles. Using the Wikidata knowledge graph instead would have led to a similar dataset, but with a different processing pipeline.

We performed feature extraction on the abstracts, titles and headers of the vital articles. To preprocess this data, we removed stop words using nltk.corpus and applied stemming using the SnowballStemmer from nltk. Next, we extracted the most frequent unigrams and bigrams, from the abstracts and headers with a frequency greater than 1e-3 and from the titles with a frequency greater than 1e-4. Finally, we retained the 4000 features that were most predictive of the labels in the chi-squared sense.

[^6]: The top level of the hierarchy comprises 11 coarse topics, the middle level 32 topics and the finest level 251 topics.
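
A sketch of this recipe using nltk and scikit-learn (assuming the nltk stopword corpus is available; the field-specific thresholds of our pipeline are collapsed here into a single `min_df`):

```python
from nltk.corpus import stopwords
from nltk.stem.snowball import SnowballStemmer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

stemmer = SnowballStemmer("english")
stop = set(stopwords.words("english"))

def tokenize(text):
    # Drop stop words, then stem the remaining tokens.
    return [stemmer.stem(t) for t in text.lower().split() if t not in stop]

def build_features(documents, labels, k=4000):
    # Unigrams and bigrams of stemmed tokens; min_df (a document-frequency
    # ratio) plays the role of the frequency thresholds mentioned above.
    vectorizer = CountVectorizer(tokenizer=tokenize, ngram_range=(1, 2),
                                 min_df=1e-3)
    counts = vectorizer.fit_transform(documents)
    # Keep the k features most predictive of the labels (chi-squared sense).
    selector = SelectKBest(chi2, k=k)
    return selector.fit_transform(counts, labels)
```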

| Coarse topic                   | Class name                                                   | #articles |
|--------------------------------|--------------------------------------------------------------|-----------|
| Arts                           | 01-Arts                                                      | 3310      |
| Biological and health sciences | 02-Animals                                                   | 2396      |
|                                | 03-Biology                                                   | 886       |
|                                | 04-Health                                                    | 791       |
|                                | 05-Plants                                                    | 608       |
| Everyday life                  | 06-Everyday life                                             | 1191      |
|                                | 07-Sports, games and recreation                              | 1231      |
| Geography                      | 08-Cities                                                    | 2030      |
|                                | 09-Countries                                                 | 1386      |
|                                | 10-Physical                                                  | 1902      |
| History                        | 11-History                                                   | 2979      |
| Mathematics                    | 12-Mathematics                                               | 1126      |
| People                         | 13-Artists, musicians, and composers                         | 2310      |
|                                | 14-Entertainers, directors, producers, and screenwriters     | 2342      |
|                                | 15-Military personnel, revolutionaries, and activists        | 1012      |
|                                | 16-Miscellaneous                                             | 1186      |
|                                | 17-Philosophers, historians, political and social scientists | 1335      |
|                                | 18-Politicians and leaders                                   | 2452      |
|                                | 19-Religious figures                                         | 500       |
|                                | 20-Scientists, inventors, and mathematicians                 | 1108      |
|                                | 21-Sports figures                                            | 1210      |
|                                | 22-Writers and journalists                                   | 2120      |
| Philosophy and religion        | 23-Philosophy and religion                                   | 1408      |
| Physical sciences              | 24-Astronomy                                                 | 886       |
|                                | 25-Basics and measurement                                    | 360       |
|                                | 26-Chemistry                                                 | 1207      |
|                                | 27-Earth science                                             | 849       |
|                                | 28-Physics                                                   | 988       |
| Society and social sciences    | 29-Culture                                                   | 2075      |
|                                | 30-Politic and economic                                      | 1825      |
|                                | 31-Social studies                                            | 355       |
| Technology                     | 32-Technology                                                | 3148      |

Table 2: The 32 node labels of the *WikiVitals* dataset, grouped by their coarser topic

## 4.3 Hyperparameters, Training, and Grid Search

**Hyperparameters and search space:** Grid search during model selection was performed over the following search space Γ:

- hidden dimension: [8, 16, 32, 64]
- input dropout: [0.2, 0.4, 0.6, 0.8]
- dropout: [0.2, 0.4, 0.6, 0.8]
- learning rate: [1e-1, 5e-2, 1e-2, 5e-3, 1e-3, 5e-4, 1e-4]
- L2-regularization strength: [1e-1, 5e-2, 1e-2, 5e-3, 1e-3, 5e-4, 1e-4, 5e-5, 1e-5]
- ϵ (only for FAGCN): [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]

For *WikiVitals*, we used a reduced search space: the hidden dimension was set to 64 and the L2-regularization strength to 1e-5, the learning rate was in [1e-1, 5e-2], and ϵ in [0.7, 0.8, 0.9].

**Training procedures for GNN models:** For all GNN model training:

- We train for a maximum of 1000 epochs.
- We use early stopping, with patience set to 200.
- There is no learning rate decay.
- L2-regularization is applied on all layers.
- All model parameters (convolutional kernel coefficients for FAGCN, weight matrices for all models) are optimized simultaneously.
- Once training has stopped, we reset the model parameters to their state at the step with the lowest validation loss.

For MLP and GCN, early stopping halts the optimization if the validation loss does not decrease for 200 epochs. For FAGCN, we stop the optimization if neither the validation loss nor the validation accuracy improves for 200 epochs. For GMNN training, we train the models for 100 epochs and perform 10 EM-loops.

**Evolutionary grid search:** Our evolutionary algorithm maintains a randomly initialized population of 100 hyperparameter configurations over generations. At each generation we retain, for the next generation, between 2 and 50 configurations whose validation accuracy exceeds the population average. New configurations are generated via a 2-pivot crossover, with configurations that have a better evaluation being more likely to be selected as parents. A mutation step assigns a new value to a configuration hyperparameter with probability 0.05 to promote exploration. Only configurations never seen before are added to complete the population at each generation. The number of generations is set to 10; beyond this value, the evaluation of the best configurations in the population does not seem to improve significantly.
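
The following sketch illustrates the kind of search loop described above; it is not our exact implementation. It assumes a hypothetical `evaluate` callback returning the validation accuracy of a configuration, and a `space` dict mapping each hyperparameter name (e.g. from the grid Γ above) to its list of candidate values.

```python
import random

def evolutionary_search(space, evaluate, pop_size=100, generations=10,
                        mutation_rate=0.05, seed=0):
    rng = random.Random(seed)
    keys = sorted(space)
    as_key = lambda cfg: tuple(cfg[k] for k in keys)
    seen = set()

    def admit(cfg):
        # Only configurations never seen before enter the population.
        h = as_key(cfg)
        if h in seen:
            return False
        seen.add(h)
        return True

    population = []
    while len(population) < pop_size:
        cfg = {k: rng.choice(space[k]) for k in keys}
        if admit(cfg):
            population.append(cfg)

    best = None
    for _ in range(generations):
        scored = sorted(((evaluate(c), c) for c in population),
                        key=lambda t: t[0], reverse=True)
        if best is None or scored[0][0] > best[0]:
            best = scored[0]
        mean = sum(s for s, _ in scored) / len(scored)
        # Keep between 2 and 50 above-average configurations as survivors.
        survivors = [c for s, c in scored if s > mean][:50]
        if len(survivors) < 2:
            survivors = [c for _, c in scored[:2]]
        population = list(survivors)
        attempts = 0
        while len(population) < pop_size and attempts < 10 * pop_size:
            attempts += 1
            # Fitter survivors (earlier in the sorted list) are likelier parents.
            weights = list(range(len(survivors), 0, -1))
            p1, p2 = rng.choices(survivors, weights=weights, k=2)
            # 2-pivot crossover between the two parents.
            i, j = sorted(rng.sample(range(len(keys) + 1), 2))
            child = {k: (p2 if i <= idx < j else p1)[k]
                     for idx, k in enumerate(keys)}
            # Mutation: re-draw each hyperparameter with small probability.
            for k in keys:
                if rng.random() < mutation_rate:
                    child[k] = rng.choice(space[k])
            if admit(child):
                population.append(child)
    return best[1]
```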
## 5 Results

Quantitative results for the node classification task, applied to our *WikiVitals* dataset and to the classical Cora, Citeseer and Pubmed datasets, are presented in tables 3 and 4. To account for the unfortunate possibility that the EM phases could decrease the accuracy attained after the initialization phase, we retain the best accuracy among the EM phases only. We then compare the average accuracies over the k splits before and after the EM phases. More precisely, we perform a paired t-test between those means, where the alternative hypothesis is that the accuracy after the EM phases is higher than before. Significance in tables 3 and 4 is denoted using the p-value: *** if p < 0.001, ** if p < 0.01, * if p < 0.05.
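
This significance test can be reproduced with scipy, as sketched below for the k per-split accuracies before and after the EM phases:

```python
from scipy.stats import ttest_rel

def em_significance(acc_before, acc_after):
    # One-sided paired t-test; alternative hypothesis: accuracy after
    # the EM phases is higher than before.
    result = ttest_rel(acc_after, acc_before, alternative="greater")
    stars = ("***" if result.pvalue < 0.001 else
             "**" if result.pvalue < 0.01 else
             "*" if result.pvalue < 0.05 else "")
    return result.pvalue, stars
```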
**Leveraging the graph structure with GNNs:** These results confirm that taking the underlying graph structure into account provides a significant performance gain on the node classification task for all datasets, regardless of whether the train set is dense or sparse. On the classical datasets, the GCN and FAGCN models outperform an MLP, which only takes node features into account and disregards the graph structure. On *WikiVitals*, which is a disassortative dataset, the FAGCN model performs best. In fact, in this case, when training on dense train sets, a simple MLP performs better than the basic GCN (see table 4). This confirms the importance of choosing an architecture adapted to the level of assortativity of the graph.

**Leveraging label correlations with GMNN:** Referring to tables 3 and 4, a general observation is that using GMNN on *WikiVitals*, Cora, Citeseer and Pubmed leads to the best average performance, whether the train sets are sparse or dense. For dense train sets the improvement provided by GMNN is either small but nevertheless significant, or insignificant. Practically, when a large proportion of the dataset is available for training a model, GMNN could be worth a try; yet this small improvement should be balanced against the high computational cost incurred.

|               | Cora (bal.)      | Cora (strat.)    | Citeseer (bal.)  | Citeseer (strat.) | Pubmed (bal.)    | Pubmed (strat.)  | WikiVitals (bal.) | WikiVitals (strat.) |
|---------------|------------------|------------------|------------------|-------------------|------------------|------------------|-------------------|---------------------|
| MLP           | 58.54 (3.98)     | 58.32 (2.17)     | 59.84 (3.54)     | 58.35 (2.48)      | 71.23 (2.85)     | 70.39 (1.70)     | 68.60 (0.92)      | 69.35 (1.10)        |
| GNN (base)    | 80.78 (2.58)     | 81.31 (2.16)     | 69.05 (3.66)     | 70.94 (2.16)      | 80.20 (1.88)     | 80.50 (2.38)     | 70.64 (0.85)      | 72.68 (1.17)        |
| + GMNN (pϕ)   | 81.14 (3.19)     | 81.56 (2.38)     | 67.04 (7.47)     | 70.82 (2.54)      | 81.26 (1.34)     | 81.15 (2.30)     | 74.72 (1.19)      | 74.64 (1.38)        |
| + GMNN (qθ)   | 80.76 (3.74)     | 81.56 (2.30)     | 69.34 (3.96)     | 71.52 (2.19)      | 81.55 (1.43)     | 81.60 (2.53)     | 74.77 (1.18)      | 74.69 (1.37)        |
| + GMNN (best) | **81.67 (3.00)** | **81.91 (2.21)** | **69.61 (3.96)** | **71.62 (2.20)**  | **81.67 (1.32)** | **81.70 (2.45)** | **74.80 (1.18)**  | **74.73 (1.36)**    |
| Significance  | **               | *                | *                | *                 | **               | ***              | ***               | ***                 |

Table 3: Test accuracy in % (standard deviation in parentheses) using sparse balanced (bal.) and sparse stratified (strat.) train sets. Best results are highlighted in bold. The base GNN is GCN for Cora, Citeseer and Pubmed, and FAGCN for *WikiVitals*.

|              | Cora             | Citeseer         | Pubmed           | WikiVitals       |
|--------------|------------------|------------------|------------------|------------------|
| MLP          | 78.49 (2.39)     | 75.02 (2.15)     | 88.68 (0.86)     | 86.55 (0.42)     |
| GCN          | 88.84 (2.39)     | 77.24 (1.73)     | 89.20 (0.86)     | 72.74 (0.61)     |
| + GMNN       | **89.26 (1.91)** | 77.43 (1.70)     | 89.18 (0.84)     | 74.19 (0.42)     |
| Significance | *                |                  |                  | ***              |
| FAGCN        | 88.87 (1.99)     | 78.27 (3.53)     | 90.23 (0.90)     | 87.84 (0.32)     |
| + GMNN       | 89.08 (1.76)     | **78.32 (3.64)** | **90.34 (0.88)** | **87.92 (0.31)** |
| Significance | *                |                  | ***              | ***              |

Table 4: Test accuracy in % (standard deviation in parentheses) using dense train sets. Best results are highlighted in bold.

For sparse train sets, GMNN brings a more obvious improvement in accuracy. This is true for all the datasets analyzed. More precisely, the improvement is significant when comparing GMNN with GCN on the classical datasets, and when comparing it with FAGCN on *WikiVitals*.

Considering table 3, we notice that the GMNN accuracies for the balanced and the stratified sparse train sets of Cora, Pubmed and *WikiVitals* are almost equal. The EM iterations in GMNN thus seem to converge to the optimal accuracy, provided the model already performed well enough at the baseline. The poor improvement observed on Citeseer for balanced train sets is discussed below in this section.

The very significant improvement brought by GMNN on *WikiVitals* when using a sparse train set may be interpreted as follows: in such a situation, the information supplied by the correlations between labels seems particularly useful to compensate for the small number of nodes available for training. On the other hand, when the train set is dense, the information needed for accurate predictions is supplied by the features of a large number of nodes.

**The cost of a fair evaluation:** The fair evaluation method is computationally expensive, especially in the model selection phase. Model selection and model assessment for *WikiVitals*, the largest dataset we considered, required roughly 30 GPU hours. We limit this cost using three different techniques. First, when training models on *WikiVitals* we cannot afford to explore the whole hyperparameter space Γ that was defined for Cora, Citeseer and Pubmed, so we restrict the exploration to a subset, as described in section 4.3. Second, we use the evolutionary grid search algorithm described in section 4.3, which limits the exploration to roughly 2%-15% of Γ depending on the model. Last, we chose a parsimonious strategy for selecting the hyperparameters β^(j) once α^(j) has been determined, namely setting β^(j) = α^(j).

**The limits of a fair evaluation:** A possible issue with the fair evaluation could occur when the validation sets D^(j)_valid are too small. In such situations the selected hyperparameters γ^(j) are at higher risk of being sub-optimal. To address this we could adopt the same strategy that was used in the evaluation phase, namely randomizing over several initializations of the model parameters. We believe this is the origin of the poor performance observed on Citeseer with the balanced train sets in table 3.

## 6 Conclusion and Perspectives

This paper introduces *WikiVitals*, a new large and disassortative document-document graph dataset, and adapts a fair comparison method for GNNs to GMNN in order to evaluate the contribution of three distinct sources of information to a semi-supervised node classification task: the node features, the underlying graph structure and the label correlations. Taking label correlations into account via a GMNN model was our main focus. We confirmed that these correlations indeed provide a significant improvement in classification accuracy when the nodes available for training are sparse over the entire graph. These results were observed both on *WikiVitals* and on the classical benchmark datasets Cora, Citeseer and Pubmed, which makes us confident that this conclusion holds for practical uses of GMNN.

In future work we intend to leverage the hierarchical categorization that comes with the *WikiVitals* dataset to improve classification accuracy; the question of how exactly to define correlations for such hierarchical labels is much more subtle. Another interesting research direction is handling the multi-label case. This requires both defining an appropriate notion of multi-label correlation and adapting the GMNN model accordingly. It would, for instance, allow us to deal with ambiguous situations in which some documents are assigned several labels, as often occurs in practice.

## References

Vincent D Blondel, Jean-Loup Guillaume, Renaud Lambiotte, and Etienne Lefebvre. Fast unfolding of communities in large networks. *Journal of Statistical Mechanics: Theory and Experiment*, 2008(10):P10008, 2008.

Deyu Bo, Xiao Wang, Chuan Shi, and Huawei Shen. Beyond low-frequency information in graph convolutional networks. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pp. 3950–3957, 2021.

Karsten M Borgwardt, Cheng Soon Ong, Stefan Schönauer, SVN Vishwanathan, Alex J Smola, and Hans-Peter Kriegel. Protein function prediction via graph kernels. *Bioinformatics*, 21(suppl_1):i47–i56, 2005.

Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. *Advances in Neural Information Processing Systems*, 29, 2016.

Paul D Dobson and Andrew J Doig. Distinguishing enzyme structures from non-enzymes without alignments. *Journal of Molecular Biology*, 330(4):771–783, 2003.

Federico Errica, Marco Podda, Davide Bacciu, and Alessio Micheli. A fair comparison of graph neural networks for graph classification. In *ICLR*, 2020.

Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. *NeurIPS*, 30, 2017.

Qian Huang, Horace He, Abhay Singh, Ser-Nam Lim, and Austin R Benson. Combining label propagation and simple models out-performs graph neural networks. In *ICLR*, 2021.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In *ICLR*, 2015.

Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In *ICLR*, 2017. ArXiv abs/1609.02907.

Johannes Klicpera, Aleksandar Bojchevski, and Stephan Günnemann. Predict then propagate: Combining neural networks with personalized PageRank for classification on graphs. In *ICLR*, 2018.

Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodola, Jan Svoboda, and Michael M Bronstein. Geometric deep learning on graphs and manifolds using mixture model CNNs. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 5115–5124, 2017.

Mark EJ Newman. Mixing patterns in networks. *Physical Review E*, 67(2):026126, 2003.

Mark EJ Newman. A measure of betweenness centrality based on random walks. *Social Networks*, 27(1):39–54, 2005.

Meng Qu, Yoshua Bengio, and Jian Tang. GMNN: Graph Markov neural networks. In *ICML*, pp. 5241–5250. PMLR 97, 2019.

Benedek Rozemberczki, Carl Allen, and Rik Sarkar. Multi-scale attributed node embedding. *Journal of Complex Networks*, 9(2), 2021. ISSN 2051-1329. doi: 10.1093/comnet/cnab014. URL https://doi.org/10.1093/comnet/cnab014.

Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. *AI Magazine*, 29(3):93–93, 2008.

Oleksandr Shchur, Maximilian Mumme, Aleksandar Bojchevski, and Stephan Günnemann. Pitfalls of graph neural network evaluation. In *Proceedings of the Relational Representation Learning Workshop (R2L 2018), NeurIPS 2018*, 2018.

Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In *ICLR*, 2018.

Tiphaine Viard, Thomas McLachlan, Hamidreza Ghader, and Satoshi Sekine. Classifying Wikipedia in a fine-grained hierarchy: what graphs can contribute. *CoRR*, abs/2001.07558, 2020. URL https://arxiv.org/abs/2001.07558.

Zhilin Yang, William Cohen, and Ruslan Salakhudinov. Revisiting semi-supervised learning with graph embeddings. In *ICML*, pp. 40–48. PMLR 48, 2016.

Steven R Young, Derek C Rose, Thomas P Karnowski, Seung-Hwan Lim, and Robert M Patton. Optimizing deep learning hyper-parameters through an evolutionary algorithm. In *MLHPC '15: Proceedings of the Workshop on Machine Learning in High-Performance Computing Environments*, pp. 1–5, 2015.

Dengyong Zhou, Olivier Bousquet, Thomas Lal, Jason Weston, and Bernhard Schölkopf. Learning with local and global consistency. *NeurIPS*, 16, 2003.

Jiong Zhu, Yujun Yan, Lingxiao Zhao, Mark Heimann, Leman Akoglu, and Danai Koutra. Beyond homophily in graph neural networks: Current limitations and effective designs. *NeurIPS*, 33, 2020.

Jiong Zhu, Ryan A Rossi, Anup Rao, Tung Mai, Nedim Lipka, Nesreen K Ahmed, and Danai Koutra. Graph neural networks with heterophily. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pp. 11168–11176, 2021.

Xiaojin Jerry Zhu. Semi-supervised learning literature survey. Technical report, University of Wisconsin-Madison, Department of Computer Sciences, 2005.