RedTachyon commited on
Commit
20c7dca
1 Parent(s): 0f38a89

Upload folder using huggingface_hub

Browse files
8cYcR23WUo/10_image_0.png ADDED (21.7 kB)
8cYcR23WUo/10_image_1.png ADDED (90.5 kB)
8cYcR23WUo/19_image_0.png ADDED (70.1 kB)
8cYcR23WUo/20_image_0.png ADDED (54.8 kB)
8cYcR23WUo/20_image_1.png ADDED (64.8 kB)
8cYcR23WUo/8_image_0.png ADDED (1.67 kB)
8cYcR23WUo/8_image_1.png ADDED (87 Bytes)
8cYcR23WUo/8_image_10.png ADDED (721 Bytes)
8cYcR23WUo/8_image_11.png ADDED (32.8 kB)
8cYcR23WUo/8_image_12.png ADDED (2.34 kB)
8cYcR23WUo/8_image_13.png ADDED (381 Bytes)
8cYcR23WUo/8_image_2.png ADDED (2.48 kB)
8cYcR23WUo/8_image_3.png ADDED (1.8 kB)
8cYcR23WUo/8_image_4.png ADDED (713 Bytes)
8cYcR23WUo/8_image_5.png ADDED (248 Bytes)
8cYcR23WUo/8_image_6.png ADDED (283 Bytes)
8cYcR23WUo/8_image_7.png ADDED (419 Bytes)
8cYcR23WUo/8_image_8.png ADDED (1.4 kB)
8cYcR23WUo/8_image_9.png ADDED (385 Bytes)
8cYcR23WUo/8cYcR23WUo.md ADDED
@@ -0,0 +1,566 @@
# [Re] GNNInterpreter: A Probabilistic Generative Model-Level Explanation For Graph Neural Networks
2
+
3
Ana-Maria Vasilcoiu *ana-maria.vasilcoiu@student.uva.nl* University of Amsterdam
Thijs Stessen thijs.stessen@student.uva.nl University of Amsterdam
Thies Kersten thies.kersten@student.uva.nl University of Amsterdam
Batu Helvacioğlu batu.helvacioglu@student.uva.nl University of Amsterdam
Reviewed on OpenReview: *https://openreview.net/forum?id=8cYcR23WUo*
4
+
5
+ ## Abstract
6
+
7
Graph Neural Networks have recently gained recognition for their performance on graph machine learning tasks. The increasing attention on these models' trustworthiness and decision-making mechanisms has instilled interest in the exploration of explainability techniques, including the model proposed in "GNNInterpreter: A probabilistic generative model-level explanation for Graph Neural Networks" (Wang & Shen (2022)). This work aims to reproduce the findings of the original paper by investigating the main claims made by its authors, namely that GNNInterpreter (i) generates faithful and realistic explanations without requiring domain-specific knowledge, (ii) has the ability to work with various node and edge features, (iii) produces explanations that are representative for the target class and (iv) has a much lower training time compared to XGNN, the current state-of-the-art model-level GNN explanation technique. To reproduce the results, we make use of the open-source implementation and we test the interpreter on the same datasets and GNN models as in the original paper. We conduct an enhanced quantitative and qualitative evaluation, and additionally we extend the original experiments to include another real-world dataset with larger graphs. Our results show that the first claim can only be verified for the dataset with larger graphs. We reject the first claim for all other datasets, due to significant seed variation and training instability, as well as the extensive need for hyperparameter tuning and domain knowledge. Furthermore, we partially validate the second claim by testing on datasets with different node and edge features, but we reject the third claim due to GNNInterpreter's failure to outperform XGNN in producing dataset-aligned explanations. Lastly, we are able to confirm the last claim.
9
+
10
+ ## 1 Introduction
11
+
12
Graph Neural Networks (GNNs) have recently surfaced as powerful tools for modeling graph-structured data, demonstrating cutting-edge performance in various applications, such as graph classification (Xu et al. (2018), Zhang et al. (2018)), node classification (Veličković et al. (2017)) and link prediction (Zhang & Chen (2018)). However, as these models gain prominence, the scrutiny on their trustworthiness and their decision-making mechanisms intensifies, especially in high-stakes domains such as chemistry or biomedicine where inaccurate predictions can have substantial consequences.
16
+
17
This has instilled a lot of interest in the exploration of explainability techniques for graph models. Compared to models that work on text or image data, there are considerably more challenging obstacles that need to be addressed (Yuan et al. (2022)): discrete adjacency matrices cannot be optimized via gradient-based methods (Duval & Malliaros (2021)), domain knowledge is often necessary, and graph data is heterogeneous and highly complex. Despite these challenges, two types of graph explainability methods have emerged, each with several potent techniques: instance-level explanations (Luo et al. (2020a), Ying et al. (2019b), Vu & Thai (2020)) and model-level explanations (Yuan et al. (2020)).
20
+
21
Model-level explanations have been shown to possess considerable advantages over their counterparts, with XGNN (Yuan et al. (2020)) being the current state-of-the-art model of this kind. A more recent work, however, "GNNInterpreter: A probabilistic generative model-level explanation for Graph Neural Networks" (Wang & Shen (2022)), proposes a novel approach that claims to be more general, flexible and computationally efficient. In this work, we analyze the findings presented in this paper and verify its claims, and we present additional experiments that provide further evidence for our analysis. The code for running the reproduced experiments as well as our extensions can be found at https://anonymous.4open.science/r/Reproduction_GNNInterpreter/README.md.
22
+
23
+ ## 2 Scope Of Reproducibility
24
+
25
+ This study aims to examine and validate the results demonstrated by Wang & Shen (2022) through reproduction of their experiments. Our main goal is to verify the claims of the original paper, which can be summarized as follows:
26
+ - Claim 1: The explanations generated by GNNInterpreter are faithful and realistic. Additionally, GNNInterpreter doesn't require domain-specific knowledge to achieve that.
27
+
28
+ - Claim 2: GNNInterpreter is a general approach that performs well with different types of node and edge features.
29
+
30
+ - Claim 3: The explanations generated by GNNInterpreter are more representative regarding the target class compared to XGNN.
31
+
32
+ - Claim 4: The time complexity for training GNNInterpreter is much lower than for XGNN.
33
+
34
+ Our contributions in regards to verifying these claims and additional findings are as follows:
35
+ - Our further analysis on the training dynamics of GNNInterpreter demonstrates that the method performs best on large graphs, but experiences training instability when working with small graphs.
36
+
37
+ - We investigate the impact of graph size on how faithful and realistic the explanations generated by GNNInterpreter tend to be. We find evidence that larger graphs tend to produce more faithful and realistic results.
38
+
39
+ - We find evidence that the GNNInterpreter requires hyperparameter tuning with domain-specific knowledge or searching random initializations to generate faithful and realistic explanations when used with datasets that have small graphs.
40
+
41
+ - We show evidence that GNNInterpreter can be used as a general approach with different types of node and edge features.
42
+
43
+ - Based on our qualitative analysis, we show that the explanations generated by GNNInterpreter are generally on par with XGNN and are outperformed by GNNExplainer for certain datasets.
44
+
45
- We show empirically that the time complexity of GNNInterpreter is much lower than that of XGNN, but that in practice the time spent on hyperparameter tuning or random-seed searching should also be accounted for.
46
+
47
+ ## 3 Background
48
+
49
+ Graph Neural Networks (GNNs). GNNs have demonstrated remarkable progress in recent years, addressing challenges associated with diverse types of graph-based data. Despite the plethora of proposed models (Kipf & Welling (2017), Gilmer et al. (2017), Veličković et al. (2017)), they often converge on a shared concept of message passing. Due to their success and ubiquity in various applications, as well as due to their increased complexity, the need for interpretability and explainability has become more pronounced than ever. As such, numerous post-hoc techniques that aim to explain the inner workings of GNN models have emerged, with two main categories, namely instance-level and model-level explanations.
50
+
51
Instance-Level Explanation of GNNs. Explaining GNNs at an instance level has recently gained momentum, with the goal of explaining particular predictions for specific inputs one at a time. With some exceptions, GNN instance explanation methods can broadly be classified into six categories: gradient-based (Baldassarre & Azizpour (2019)), perturbation-based (Yuan et al. (2021), Ying et al. (2019a), Luo et al. (2020b)), decomposition (Schwarzenberg et al. (2019)), surrogate (Duval & Malliaros (2021)), generation-based (Shan et al. (2021), Lin et al. (2021)) and counterfactual-based methods. One noteworthy example in this category is GNNExplainer (Ying et al. (2019a)), which identifies the most important subgraph structure and the node features within that subgraph that contribute to the GNN's prediction.
55
+
56
Model-Level Explanation of GNNs. In contrast with local explanations, model-level explanations can give insight into how a black-box machine learning model makes decisions as a whole, irrespective of any particular input. For tabular data, this can be achieved using techniques such as partial dependence plots (Friedman (2001), Hooker (2007), Apley & Zhu (2020)) or permutation feature importance (Strobl et al. (2008), Strobl et al. (2007), Apley & Zhu (2020)). However, these techniques are not applicable to GNNs, as the intricate nature of graph configurations does not allow for straightforward modifications. For explaining graph classification, one approach that can deal with the variability of graph data is the generation of explanation graphs, namely prototypical examples for a given class. This opens up the opportunity to analyze which graph patterns or subgraph patterns lead to certain predictions. In turn, this gives high-level insights about what the GNN model is focusing on and a better understanding of whether the model works as expected and, if not, how it can be adjusted.
60
+
61
GNNInterpreter, XGNN and GCExplainer. GNNInterpreter, the focus of the original paper being reproduced in this study and one approach to model-level explanation, achieves its purpose by training a generative model while relaxing the discrete nature of edges. Further details about this can be found in Section 4. Another such model, and the current state-of-the-art approach for graph explainability, is XGNN (Yuan et al. (2020)), which in contrast formulates the graph generation problem as a reinforcement learning task. In this case, the generator predicts how to add an edge to the current graph and is trained to generate graphs that maximize the class score of the target class by using a policy gradient method based on information from the trained GNN. Ensuring the validity and intelligibility of explanation graphs is done by incorporating graph rules, which may include simple constraints such as enforcing a single edge between any two nodes or more complex domain-specific rules (e.g. a chemical valency check in a molecular graph dataset). Lastly, a noteworthy post-hoc model-level explanation method is GCExplainer (Magister et al. (2021)), which adapts the well-known Automated Concept-based Explanation approach (Ghorbani et al. (2019)) and puts the human in the loop to ensure the relevance and comprehensibility of its explanations.
65
+
66
Self-interpretable GNNs. In response to post-hoc methods failing to reveal the original reasoning of GNNs, the pursuit of self-interpretable GNN models, which integrate explainability into their architecture, has also recently gained traction (Kakkad et al. (2023)). Some examples include: (1) ProtGNN (Zhang et al. (2022)), which derives explanations from a case-based reasoning process and generates predictions by comparing inputs to learned prototypes in the latent space, and (2) the model introduced by Dai & Wang (2021), which explains node classification tasks based on a K-nearest-neighbors approach. It is worth mentioning that self-interpretability is not ideal either, as it often comes at the cost of model accuracy.
68
+
69
## 4 GNNInterpreter Model
70
+
71
GNNInterpreter is a model-agnostic and model-level explanation method aiming to reveal the high-level decision-making process of message-passing-based GNNs. It works by learning a probabilistic generative graph distribution in order to produce the most discriminative graph pattern that the explained GNN detects when making a prediction.

Learning objective. GNNInterpreter achieves its goal by numerically optimizing a novel objective function with two distinct but important parts: (1) maximizing the likelihood of the explanation graph being predicted as the target class by the GNN model and (2) confining the explanation graph distribution within domain-specific boundaries. Instead of manually defining domain rules, GNNInterpreter leverages the abstract knowledge learned by the GNN and achieves its second goal by maximizing the similarity between the embedding of the explanation graph and the average embedding of all target-class graphs from the training data. Mathematically, this can be formulated as follows:
72
+
73
$$\operatorname*{max}_{G}L(G)=\operatorname*{max}_{A,Z,X}L(A,Z,X)=\operatorname*{max}_{A,Z,X}\phi_{c}(A,Z,X)+\mu\cdot\operatorname*{sim}_{\operatorname{cos}}(\psi(A,Z,X),{\bar{\psi}}_{c})\tag{1}$$
74
+
75
where L is the objective function; A represents the adjacency matrix, Z the edge feature matrix and X the node feature matrix; ϕc is the scoring function corresponding to the target class c; ψ is the graph embedding function of the explained GNN and ψ̄c is the average graph embedding of class-c graphs; simcos denotes the cosine similarity; and lastly, µ is a hyperparameter describing the weight factor.
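To make Equation 1 concrete, below is a minimal sketch of how the objective could be evaluated for one candidate graph; `gnn.class_score` and `gnn.embed` are hypothetical handles to the explained GNN's class logits and graph embedding, and `mean_embed_c` stands for ψ̄c.

```python
import torch.nn.functional as F

def objective(gnn, A, Z, X, target_class, mean_embed_c, mu=1.0):
    """Evaluate L(G) from Eq. 1 for one (possibly relaxed) explanation graph."""
    phi_c = gnn.class_score(A, Z, X)[target_class]        # class score phi_c(A, Z, X)
    psi = gnn.embed(A, Z, X)                               # graph embedding psi(A, Z, X)
    sim = F.cosine_similarity(psi, mean_embed_c, dim=-1)   # sim_cos(psi, psi_bar_c)
    return phi_c + mu * sim                                # quantity maximized over the graph
```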
76
+
77
Graph distribution. To learn a probabilistic graph distribution that maximizes the target class score, two assumptions are made. Firstly, the explanation graph is assumed to be a Gilbert random graph (Gilbert (1959)). Secondly, the features are independently distributed, allowing for the following factorization of a random graph variable G:
79
+
80
$$P(G)=\prod_{i\in{\bf V}}P(x_{i})\cdot\prod_{(i,j)\in{\bf E}}P(z_{ij})P(a_{ij})\tag{2}$$

where xi represents the node feature of node i, zij the edge feature of edge (*i, j*), and aij = 1 if there is an edge (*i, j*). The variables are modeled as aij ∼ Bernoulli(θij), where θij is the probability of edge (*i, j*) existing, zij ∼ Categorical(qij) and xi ∼ Categorical(pi), where ∥qij∥1 = ∥pi∥1 = 1.
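For intuition, the following is a small sketch (our own illustration, not the authors' code) of drawing one discrete graph from this factorized distribution; the tensor shapes are assumptions.

```python
import torch

def sample_discrete_graph(theta, q, p):
    """Draw (A, Z, X) following Eq. 2.

    theta: (N, N) edge probabilities, q: (N, N, E) edge-feature probabilities,
    p: (N, F) node-feature probabilities (rows sum to 1).
    """
    A = torch.bernoulli(theta)                              # a_ij ~ Bernoulli(theta_ij)
    Z = torch.distributions.Categorical(probs=q).sample()   # z_ij ~ Categorical(q_ij)
    X = torch.distributions.Categorical(probs=p).sample()   # x_i  ~ Categorical(p_i)
    return A, Z, X
```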
85
+
86
+ The learning objective can thus be rewritten to maximize it over those random variables:
87
+
88
$$\max_{G}L(G)=\max_{\Theta,Q,P}\mathbb{E}_{G\sim P(G)}\left[\phi_{c}(\mathbf{A},\mathbf{Z},\mathbf{X})+\mu\cdot\text{sim}_{\text{cos}}(\psi(\mathbf{A},\mathbf{Z},\mathbf{X}),\overline{\psi}_{c})\right]\tag{3}$$
89
+
90
Continuous relaxation. To address the discrete nature of graphs and achieve node and edge feature flexibility, GNNInterpreter relaxes the discrete variables into continuous ones, which are specified using the Concrete distribution (Maddison et al. (2016)). The random variables are modeled as follows:
91
+
92
$$\begin{array}{ll}{\tilde{x}_{i}\sim\mathrm{Concrete}(\xi_{i},\tau_{x})}&{\mathrm{for~}\tilde{x}_{i}\in\tilde{\mathbf{X}}\mathrm{~and~}\xi_{i}\in\Xi}\\ {\tilde{z}_{ij}\sim\mathrm{Concrete}(\eta_{ij},\tau_{z})}&{\mathrm{for~}\tilde{z}_{ij}\in\tilde{\mathbf{Z}}\mathrm{~and~}\eta_{ij}\in\mathbf{H}}\\ {\tilde{a}_{ij}\sim\mathrm{BinaryConcrete}(\omega_{ij},\tau_{a})}&{\mathrm{for~}\tilde{a}_{ij}\in\tilde{\mathbf{A}}\mathrm{~and~}\omega_{ij}\in\Omega}\end{array}\tag{4}$$
94
+
95
where τ ∈ (0, ∞) is a hyperparameter representing the temperature, ωij = log(θij/(1−θij)), ηij = log qij, and ξi = log pi. The reparameterization trick is also applied to make this distribution differentiable, facilitating gradient-based optimization. An independent random variable ϵ ∼ Uniform(0, 1) is introduced to adjust the sampling function as follows:
96
+
97
$$\begin{cases}\tilde{x}_{i}\sim\mathrm{softmax}((\xi_{i}-\log(-\log\epsilon))/\tau_{x})\\ \tilde{z}_{ij}\sim\mathrm{softmax}((\eta_{ij}-\log(-\log\epsilon))/\tau_{z})\\ \tilde{a}_{ij}\sim\mathrm{sigmoid}((\omega_{ij}+\log\epsilon-\log(1-\epsilon))/\tau_{a})\end{cases}\tag{5}$$

With these two modifications it is possible to draw samples that follow the distribution without losing differentiability. Finally, the learning objective is approximated using the Monte Carlo method with K samples:
100
+
101
$$\max_{\Theta,Q,P}\mathbb{E}_{G\sim P(G)}[L(A,Z,X)]\approx\max_{\Omega,\mathbf{H},\Xi}\mathbb{E}_{\epsilon\sim\mathrm{Uniform}(0,1)}[L(\tilde{A},\tilde{Z},\tilde{X})]\approx\max_{\Omega,\mathbf{H},\Xi}\frac{1}{K}\sum_{k=1}^{K}L(\tilde{A},\tilde{Z},\tilde{X})\tag{6}$$

where Ã, Z̃ and X̃ represent the continuously relaxed versions of A, Z, X (described above) obtained after applying the reparameterization trick.
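The following sketch illustrates Equations 4-6: it draws relaxed graphs with the reparameterization trick and averages the objective over K Monte Carlo samples. It reuses the hypothetical `objective` helper sketched above and is our illustration under stated assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def sample_relaxed_graph(omega, eta, xi, tau_a=0.2, tau_z=0.2, tau_x=0.2):
    """Draw one continuously relaxed graph (A~, Z~, X~) via Eq. 5."""
    u = torch.rand_like(omega).clamp(1e-6, 1 - 1e-6)
    A = torch.sigmoid((omega + torch.log(u) - torch.log(1 - u)) / tau_a)  # BinaryConcrete edges
    g_z = -torch.log(-torch.log(torch.rand_like(eta).clamp(1e-6, 1 - 1e-6)))
    g_x = -torch.log(-torch.log(torch.rand_like(xi).clamp(1e-6, 1 - 1e-6)))
    Z = F.softmax((eta + g_z) / tau_z, dim=-1)                            # Concrete edge features
    X = F.softmax((xi + g_x) / tau_x, dim=-1)                             # Concrete node features
    return A, Z, X

def monte_carlo_objective(gnn, omega, eta, xi, target_class, mean_embed_c, mu=1.0, K=10):
    """Eq. 6: average the relaxed objective over K reparameterized samples."""
    values = [objective(gnn, *sample_relaxed_graph(omega, eta, xi),
                        target_class, mean_embed_c, mu) for _ in range(K)]
    return torch.stack(values).mean()
```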
103
+
104
+ Regularization. L1 and L2 regularization are used to avoid gradient saturation and to also encourage model sparsity. Equation 7 shows how both types are applied to the Ω parameter, which describes the categorical distribution for each connection in the adjacency matrix:
105
+
106
$$R_{L_{k}}=||\mathbf{\Omega}||_{k}\quad{\mathrm{with~}}k\in\{1,2\}\tag{7}$$
108
To limit the size of the explanation graphs and to prevent them from growing indefinitely with repeated discriminative patterns, a budget penalty regularization is also used. This is expressed in Equation 8, where the additional parameter B represents the expected maximum number of edges in the explanation graph.
109
+
110
$$R_{B}=\mathrm{softplus}(||\mathrm{sigmoid}(\Omega)||_{1}-B)^{2}\tag{8}$$
112
+ Lastly, as inspired by the work of Luo et al. (2020a), a connectivity incentive term is also applied to promote correlation, which is done by minimizing the Kullback-Leibler divergence DKL between the probabilities of each pair of edges that share a common node. This can be seen in Equation 9, where Pij represents the Bernoulli distribution parameterized by sigmoid(ωij ).
113
+
114
$$R_{c}=\sum_{i\in V}\sum_{j,k\in{\mathcal{N}}(i)}\mathrm{D}_{\mathrm{KL}}(P_{ij}\,||\,P_{ik})\tag{9}$$
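Below is a minimal sketch of how the three regularizers in Equations 7-9 could be computed from the edge parameters Ω; the neighbor bookkeeping is simplified, the per-term weights are left to the caller, and this is our illustration rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def size_regularizer(omega, k=1):
    """L1/L2 penalty on the 1-D tensor of edge logits omega (Eq. 7)."""
    return torch.norm(omega, p=k)

def budget_penalty(omega, budget):
    """Softplus-squared penalty on the expected number of edges (Eq. 8)."""
    expected_edges = torch.sigmoid(omega).sum()
    return F.softplus(expected_edges - budget) ** 2

def connectivity_incentive(omega, neighbors):
    """KL incentive between Bernoulli edge distributions sharing a node (Eq. 9).

    `neighbors` maps each node i to the indices (into omega) of its incident edges.
    """
    total = omega.new_zeros(())
    for idx in neighbors.values():
        p = torch.sigmoid(omega[idx])               # Bernoulli parameters of incident edges
        for a in range(len(idx)):
            for b in range(len(idx)):
                if a != b:
                    pa, pb = p[a], p[b]
                    total = total + pa * torch.log(pa / pb) \
                                  + (1 - pa) * torch.log((1 - pa) / (1 - pb))
    return total
```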
118
+
119
+ ## 5 Methodology
120
+
121
The GNNInterpreter implementation, along with datasets, pre-trained classifiers and experiment notebooks, is publicly available¹. Consequently, we make use of the authors' code in two different versions, namely the initial and the latest version, with some minor bugfixes and improvements. We set up an environment that replicates the required versions of all frameworks and packages and run all experiments consistently in this environment. Additionally, we enhance and extend the work of Wang and Shen by conducting additional experiments and presenting further results on the Reddit-Binary dataset. It is important to note that the GitHub implementation by the authors is not officially endorsed: while labeled 'official' and hosted on the main author's GitHub page, it is not directly referenced in the original paper. However, after extensively reviewing the code, we found that it closely matches the architecture described in the paper. For that reason, we decided to use this implementation with only minor changes, as mentioned earlier, addressing ambiguities where the paper lacked clarity (see Section 5.3 and Appendix E). We have made available the revised code we used for all experiments, together with demo notebooks².
122
+
123
+ ## 5.1 Datasets
124
+
125
The experiments conducted in the original paper, both quantitative and qualitative, involve the use of 5 different datasets, with the aim of demonstrating the flexibility of GNNInterpreter in working with various feature types.
126
+
127
+ Synthetic Datasets. The authors have created 3 synthetic datasets, namely Cyclicity, Motif and Shape, each with their unique characteristics:
128
+ - The **Shape** dataset consists of 5 classes of graphs: Lollipop, Wheel, Grid, Star and Others. Each class contains graphs with the corresponding shape, while the Others class has graphs with random topology.
129
+
130
+ - Within the **Motif** dataset, graphs are classified based on the presence of specific motifs, including House, House-X, Complete-4, and Complete-5, each corresponding to a distinct class label. Additionally, there is a separate class encompassing graphs without any unique motifs. Nodes in this dataset are characterized by a categorical feature with 5 potential color values.
131
+
132
- The graphs in the **Cyclicity** dataset are identified by edge features with 2 potential values: green or red. The classification involves 3 labels: Red-Cyclic, Green-Cyclic and Acyclic. A graph belongs to the red/green cyclic classes only if it contains a cycle formed exclusively of red or green edges, respectively. Graphs without cycles, or graphs whose only cycles contain a mix of edge colors, are categorized as Acyclic.
133
+
134
+ For further insights into each synthetic dataset generation procedure, please refer to Appendix F.
135
+
136
+ Real-world Dataset. Additionally, the authors used the real-world MUTAG dataset (Morris et al. (2020)).
137
+
138
This dataset contains 188 graphs representing molecules with either the class label mutagenic or non-mutagenic. These graphs have an average of 17.93 nodes and 19.79 edges. Both the nodes and the edges have labels (although edge labels are unused), representing atoms and types of chemical bonds respectively.

¹ https://github.com/yolandalalala/GNNInterpreter
² https://github.com/MeneerTS/GNNinterpreter_Reproduction
139
+
140
Important to note is that hydrogen atoms were removed from all graphs. The original paper that collected the dataset (Debnath et al. (1991)) notes that the main factors for mutagenic properties were the hydrophobicity of the molecule, the energy of the lowest unoccupied electron orbital and the presence of three or more fused rings. For our purposes, as the hydrogen atoms have been removed, this translates to the presence of many NO2 bonds and the presence of many fused rings. However, it is essential to acknowledge that determining the mutagenicity of a molecule is a highly challenging task and that these assumptions give a simplified view that is only valid for the purposes of testing out GNNInterpreter.

Additional Dataset. Since the original paper only tested the model on one real-world dataset, we broadened the scope of our investigation for Claims 1 and 2 by using the Reddit-Binary dataset (Yanardag & Vishwanathan (2015)). This dataset comprises 2000 real-world examples of text posts and their corresponding comments, known as threads, from two different types of posts on Reddit. These posts are represented as graphs where nodes correspond to users and edges denote interactions, such as comments on each other's posts or replies to comments. The posts are categorized into 2 classes based on their origin within specific subreddits: the first class includes threads from the *IAmA* and *Askreddit* subreddits, characterized by question-answer interactions, while the second class consists of discussion threads from *TrollXChromosomes* and *atheism*, where multiple users engage in diverse interactions. The distribution of these classes is balanced, although the first class has larger graphs, with 641.24 nodes and 1471.90 edges on average, while the second class only has 217.99 nodes and 519.11 edges on average.
142
+
143
## 5.2 GNN Architectures
144
+
145
The experimental study of the original paper employs different GNN classifiers trained on each of the datasets mentioned above. For the MUTAG, Motif and Shape datasets, a GCN model is used (Kipf & Welling (2017)), consisting of 3 graph layers of width 64, a global mean pooling layer and 2 dense layers. For the additional Reddit-Binary dataset, this architecture remained the same but with 5 layers to accommodate the larger graph size. Lastly, the GNN classifier used for the Cyclicity dataset is a deep NNConv model (Gilmer et al. (2017)) with 5 NNConv layers of width 32, and again a global mean pooling layer and 2 dense layers at the end. Both model architectures use LeakyReLU activation.
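For reference, here is a minimal PyTorch Geometric sketch of the GCN classifier described above (3 GCNConv layers of width 64, global mean pooling, 2 dense layers, LeakyReLU); details not stated in the paper, such as the hidden width of the first dense layer, are assumptions.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class GCNClassifier(torch.nn.Module):
    def __init__(self, num_node_features, num_classes, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(num_node_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.conv3 = GCNConv(hidden, hidden)
        self.lin1 = torch.nn.Linear(hidden, hidden)
        self.lin2 = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.leaky_relu(self.conv1(x, edge_index))
        x = F.leaky_relu(self.conv2(x, edge_index))
        x = F.leaky_relu(self.conv3(x, edge_index))
        x = global_mean_pool(x, batch)   # graph-level embedding (psi)
        x = F.leaky_relu(self.lin1(x))
        return self.lin2(x)              # class scores (phi)
```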
149
+
150
+ ## 5.3 Hyperparameters
151
+
152
To closely replicate the original experiments, we first adopted the same hyperparameter values as in the paper whenever they were explicitly mentioned. This entailed using τ = 0.2 for the Concrete distribution temperature, a sample size K = 10 for the Monte Carlo sampling, an SGD optimizer with learning rate 1 during training, a max node count of 20 and an embedding similarity weight µ = 10 for MUTAG and µ = 1 for all other datasets. In our experimentation with regularization weights, we discovered that certain values led to empty-graph generation errors. As we noticed a discrepancy between the values mentioned in the paper and the ones used in the code notebooks provided in the original GitHub repository, the next step was to try this other set of values. However, for some datasets these values also led to suboptimal results. Please refer to Appendix A for the actual paper and notebook parameters, and to Appendix B for results using these values. Finally, we extensively tuned these weights separately for each dataset in order to achieve the highest quantitative results, just as would be done when applying GNNInterpreter to a new problem. Note that adjustments were not necessary for all datasets; for example, the paper parameters already achieve close to perfect quantitative results on the MUTAG dataset, while the notebook ones achieved comparable results on the Cyclicity dataset. The final values based on which we later report the results can be found in Appendix C.
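For clarity, the explicitly stated values translate into a configuration like the sketch below; the key names are ours, and the regularization weights are omitted because they had to be tuned per dataset.

```python
# Hyperparameters stated explicitly in the original paper (key names are ours).
BASE_CONFIG = {
    "tau": 0.2,          # Concrete distribution temperature
    "K": 10,             # Monte Carlo sample size
    "optimizer": "SGD",
    "lr": 1.0,
    "max_nodes": 20,     # 50 for the additional Reddit-Binary dataset
    "mu": 1.0,           # embedding-similarity weight (10.0 for MUTAG)
}
```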
153
+
154
+ For the additional Reddit-Binary dataset, hyperparameter tuning had to be done from scratch. Due to the larger size of the graphs, a larger max node count of 50 was required, although it was kept as small as possible to ensure readability of the explanations. The other hyperparameters can be found in Appendix C.
155
+
156
+ ## 5.4 Experimental Setup
157
+
158
GNN reproduction. To validate the accuracy of the base GNN models, we tested both the provided model checkpoints and retrained models. First, we loaded the provided checkpoints into the architectures outlined in Section 5.2. To test these models, we used the same train-test split as reported in the authors' notebooks. Subsequently, we retrained the models according to the original paper specifications, employing Kaiming parameter initialization and an AdamW optimizer with a 0.01 learning rate and 0.01 weight decay.

GNNInterpreter. To verify the first three claims, we measured the performance of GNNInterpreter both quantitatively and qualitatively, as in the original paper. The quantitative analysis involved passing model-generated explanation graphs to the GNN classifiers to obtain class probabilities. The main rationale guiding the original authors' decision for this was that both GNNInterpreter and XGNN are designed to maximize target class scores. As such, a GNNInterpreter model is considered to have good quantitative performance if the explanation graphs yield high probabilities for the correct class. Based on this, the original study sampled 1000 explanation graphs from a single GNNInterpreter and averaged over the resulting GNN class probabilities to measure the quantitative performance. During our reproduction, however, we noticed that the performance of a GNNInterpreter depends heavily on the random initialization, and we thus decided to train 100 different models for each class with seeds between 0 and 100. We then applied the same 1000-explanation-graph quantitative analysis to each model individually and averaged over all models. Furthermore, we quantified the frequency with which GNNInterpreter could be categorized as a good model, defined by a correct class probability of at least 0.9. In contrast, we defined a bad model as one that generated either empty graphs or graphs with a target class probability of less than 0.1. Building upon these definitions, we analyze and report the quantitative analysis for each class in all datasets using the worst and best performing models, as well as by averaging over all 100 models. To test Claim 4, we used the same 100 models to obtain the average training times per class for GNNInterpreter across all 4 datasets and for XGNN on the MUTAG dataset.
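The quantitative protocol can be summarized by the following sketch, where `interpreter.sample_graph` and `gnn.predict_proba` are hypothetical helpers; it averages the target class probability over 1000 sampled explanation graphs and applies our good/bad thresholds.

```python
def evaluate_interpreter(gnn, interpreter, target_class, n_graphs=1000):
    """Average target-class probability over sampled explanation graphs."""
    probs = [gnn.predict_proba(interpreter.sample_graph())[target_class]
             for _ in range(n_graphs)]
    avg = sum(probs) / len(probs)
    # Our categorization: >= 0.9 is a "good" model; <= 0.1 (or only empty graphs) is "bad".
    label = "good" if avg >= 0.9 else "bad" if avg <= 0.1 else "intermediate"
    return avg, label
```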
159
+
160
For the qualitative analysis, we used explanation graphs that achieved a target class probability of 1.0 whenever possible, and otherwise explanations generated by the best model out of 100 seeds. We then performed visual inspections of these graphs, paying particular attention to properties mentioned in the original paper.
161
+
162
We did some further analysis of the training dynamics after encountering instability during training. We tracked the explanation graph size and correct class probability throughout the training iterations. We used the Motif, Cyclicity and Reddit-Binary datasets for this analysis. Reddit-Binary was chosen because of its significantly larger graphs. Motif and Cyclicity were chosen because they were the two datasets where we encountered the most instability while still occasionally being able to get good results.
163
+
164
Additional Dataset. Due to the larger graph and model size of the Reddit-Binary dataset, replicating the same experiments as with the other datasets was computationally infeasible. Therefore, we only trained the base GNN once and averaged GNNInterpreter results over just 10 seeds. In addition, we constructed a random baseline to examine the effect of the substantial difference in graph size between the two classes.

Ablation study. The ablation study reported in the original paper aimed to illustrate the importance of the cosine similarity term in the learning objective for generating meaningful explanations. This was done by setting the µ parameter to 0 in Equation 1. To recreate this experiment, we trained a GNNInterpreter on the mutagen class, using the same parameters as reported in the original paper and, once more, averaged over 5 different seeds for a more accurate representation.
165
+
166
Verification study. The verification study performed by the original authors aimed to confirm that GNNInterpreter's explanation graphs were actually explaining the behavior of the GNN classifier. As part of the qualitative analysis on the Motif dataset, rules that represent the discriminative features the GNN uses were manually extracted from the explanation graphs. These rules were then used to create 8 new motifs that share common features with, but are different from, the ground truth motif. 5000 base graphs were generated and attached to each of the 8 fake motifs and the ground truth motif. These new data points were passed through the GNN model. If the extracted rules and explanation graphs reflect the GNN faithfully, the GNN is expected to misclassify the 8 new fake motifs as the ground truth motif. This would prove that the explanation graph generated by GNNInterpreter includes features that are actually used by the GNN to classify the ground truth class.
169
+
170
+ ## 5.5 Computational Requirements
171
+
172
All our experiments were run using a single CPU, an AMD Ryzen 5900X. We made an effort to replicate the Python environment as closely as possible to the one specified by the authors. As such, we used Python 3.9.0 and PyTorch 2.0.0. For PyTorch Geometric, we employed version 2.3.0, as indicated in the authors' GitHub, rather than the version specified in the paper. We opted for this approach because we ran all experiments using the code from this source. In addition, we used the libraries torch-scatter and torch-sparse for torch 2.0.0 with CUDA version 11.8, although no GPU is required. This environment can be found in our repository. Prior to use, please ensure to follow the provided installation guide. All models, including XGNN, use 83 watts of power during training. The total training time for averaging over 100 seeds, experimenting on GNNInterpreter, and training the GNNs was 15.26 hours, with a total power consumption of 1.267 kWh.
173
+
174
## 6 Results

## 6.1 GNN Accuracies
175
+
176
Our analysis of both the pretrained and retrained GNN models revealed that, for all datasets except MUTAG, both approaches consistently yielded lower, yet still relatively comparable, accuracy scores compared to those reported in the paper. Averaged results can be found in Table 1. Of note is that even though the average performance on the MUTAG dataset was significantly lower, some individual models perform about as well as reported in the original paper, reaching 0.944 accuracy in our limited test of 5 seeds (reflected by the high variance). Performance similar to the original paper can thus be reached by finding the right seed.
177
+
178
+ | Dataset | Pre-trained model | Re-trained model | Original paper accuracy |
179
+ |---------------|---------------------|--------------------|---------------------------|
180
+ | MUTAG | 0.8333 | 0.7222 ± 0.1267 | 0.9468 |
181
+ | Cyclicity | 0.9493 | 0.9302 ± 0.0091 | 0.9921 |
182
+ | Motif | 0.9966 | 0.991 ± 0.00235 | 0.9964 |
183
+ | Shape | 0.9812 | 0.9795 ± 0.0016 | 0.9725 |
184
+ | REDDIT-BINARY | - | 0.8650 | - |
185
+
186
+ Table 1: The accuracies of the GNN models. Retraining was done on 5 different seeds, for which the mean and standard deviation are reported. The GNN for the Reddit-Binary dataset was only trained once due to its long training time.
187
+
188
+ ## 6.2 Quantitative And Qualitative Evaluation
189
+
190
As explained in Section 5.4, we performed both a quantitative and a qualitative analysis. Quantitative results, in the form of predicted class probabilities and training times, are presented in Table 2. One explanation graph per class, generated by the best interpreter that was found, can be seen in Table 3, while more qualitative results are presented in Appendix D.
191
+
192
MUTAG. The quantitative analysis shows that training the GNNInterpreter is almost always successful in terms of reaching high class probabilities. However, the qualitative results do not seem to match the examples from the dataset, and for the mutagen class they also do not match the result from the original paper. In terms of training times for this dataset, we observe a significant disparity between GNNInterpreter and XGNN: the average training time for the former is 0.79 seconds, while for the latter it is 38.83 seconds.
193
+
194
Cyclicity. The quantitative results show that the model is capable of achieving a perfect predicted probability for all 3 classes in this synthetic dataset, as can be seen in the Best model column of Table 2. The qualitative results further corroborate this, with the explanation graphs containing a single red and green cycle for the first 2 classes respectively, and a cycle with heterogeneous edge features for the Acyclic class. However, it is crucial to emphasize that under different random initializations, the model predominantly predicts correct results only for the Red Cyclic class, while for the other 2 classes the likelihood of obtaining a good model is only 0.22 and 0.37 respectively.
195
+
196
Shape. Quantitative results on the Shape dataset revealed that GNNInterpreter struggled to reliably converge, except for the Star class. While this class achieved an average class probability of 1, other classes did not perform as well: the Lollipop class has a low average of 0.22, while the Wheel class reached 0.84 and the Grid class 0.78.
197
+
198
| Dataset (Method) | Class | Average of all models | Best model | Worst model | Percentage of good models | Percentage of bad models | Training time (s) |
|---|---|---|---|---|---|---|---|
| MUTAG (XGNN) | Mutagen | 0.987 ± 0.100 | - | - | - | - | 38.83 |
| | Nonmutagen | 0.999 ± 0.002 | - | - | - | - | |
| MUTAG (GNNInterpreter) | Mutagen | 0.999 ± 0.006 | 1.000 ± 0.000 | 0.921 ± 0.254 | 1.00 | 0.00 | 0.79 |
| | Nonmutagen | 0.943 ± 0.068 | 1.000 ± 0.000 | 0.330 ± 0.429 | 0.87 | 0.00 | |
| Cyclicity (GNNInterpreter) | Red Cyclic | 0.926 ± 0.0677 | 1.000 ± 0.000 | 0.000 ± 0.000 | 0.84 | 0.02 | 24.85 |
| | Green Cyclic | 0.665 ± 0.372 | 1.000 ± 0.000 | 0.101 ± 0.290 | 0.22 | 0.00 | |
| | Acyclic | 0.525 ± 0.120 | 1.000 ± 0.000 | 0.000 ± 0.000 | 0.37 | 0.40 | |
| Motif (GNNInterpreter) | House | 0.787 ± 0.220 | 0.991 ± 0.006 | 0.000 ± 0.000 | 0.41 | 0.08 | 19.17 |
| | House-X | 0.276 ± 0.085 | 0.999 ± 0.009 | 0.000 ± 0.000 | 0.11 | 0.63 | |
| | Complete-4 | 0.077 ± 0.020 | 0.995 ± 0.052 | 0.000 ± 0.000 | 0.06 | 0.91 | |
| | Complete-5 | 0.131 ± 0.034 | 0.997 ± 0.053 | 0.000 ± 0.000 | 0.07 | 0.82 | |
| Shape (GNNInterpreter) | Lollipop | 0.222 ± 0.294 | 0.43 ± 0.374 | 0.096 ± 0.199 | 0.00 | 0.01 | 23.48 |
| | Wheel | 0.84 ± 0.279 | 0.997 ± 0.056 | 0.058 ± 0.231 | 0.45 | 0.02 | |
| | Grid | 0.782 ± 0.327 | 0.911 ± 0.216 | 0.612 ± 0.408 | 0.02 | 0.00 | |
| | Star | 1.000 ± 0.001 | 1.000 ± 0.000 | 0.987 ± 0.109 | 1.00 | 0.00 | |
| Reddit-Binary (GNNInterpreter) | Question-Answer | 0.8454 ± 0.019 | 0.89199 | 0.72159 | - | - | 25.774 |
| | Discussion | 0.989 ± 0.000 | 0.9889 | 0.9889 | - | - | |
223
+
224
Table 2: The quantitative results for the 4 original datasets and the Reddit-Binary dataset. The metric used per model is the average class probability of 1000 explanation graphs. Reported is the average of the class probabilities for 100 models across different seeds, with the exception of Reddit-Binary, where testing was done using only 10 seeds. We also report the probabilities generated by the best and worst interpreter, together with the percentages of good and bad models obtained. The standard deviations represent the standard deviation obtained from each quantitative test, averaged over all models.
225
+
226
[Table 3 panels: generated model-level explanation graphs (image files 8_image_0.png to 8_image_13.png), arranged per dataset and method (MUTAG [XGNN], MUTAG [GNNInterpreter], Cyclicity [GNNInterpreter]) and per class, with columns showing our reproduction ("us"), the original paper's explanation ("original") and a dataset example ("example").]

Table 3: The qualitative results for the 4 datasets. For easy comparison, 3 figures for each class in all datasets are reported, namely (1) an explanation graph generated while reproducing the experiments, (2) the explanation graph reported in the original paper and (3) an example graph from the dataset, chosen to have the highest class probability that was found for its respective target class.
275
+
276
Motif. The quantitative results for the Motif dataset in Table 2 show that GNNInterpreter's performance decreases as the motif becomes less unique: Complete-4 and Complete-5 motifs are much more likely to occur in random graphs than House and House-X. Furthermore, the Complete-4 and Complete-5 classes had a good model only 6% and 7% of the time. Despite the rarity of good models, we managed to obtain at least 1 model for each class that achieved a target class probability of 1. We believe the high variance of our results may stem from the instability during training, which we explore further in Section 6.4.
277
+
278
As for the qualitative analysis, we can see in Table 3 that if a GNNInterpreter with high target class probabilities is used, we can recover the motif, or part of the motif, in the explanation graphs. Small triangles resembling the roof of the house, the X-shaped connections of House-X, and many interconnected nodes as in Complete-5 are key features of the ground truth motifs present in the respective explanation graphs. The explanation graph for Complete-4 actually contains the entire ground truth motif, positioned at the top left. We also observed an exact match with the House-X motif, which is the first image of the House-X row in Appendix Table 14. However, this graph achieved a class probability of 0.62 instead of 1.
279
+
280
Verification study. In their verification study, the original authors derived a discrete rule from the explanation graphs through qualitative analysis. However, our explanation graphs, despite having a class probability of 1.0, were significantly larger and lacked the same level of interpretability as those in the original paper. This made qualitative rule extraction impractical, rendering the replication of this experiment infeasible.
281
+
282
Ablation study. The explanation graphs generated did not change when setting µ to zero for 19 out of the 20 seeds checked. The logits also stayed exactly the same, except for one case. In that case, the GNNInterpreter did not manage to converge on the correct class, and the logit was negative with µ = 0 and positive with µ = 10. This means that the impact of changing µ is minimal for this model and dataset, only changing the outcome in some cases.
283
+
284
+ ## 6.3 Results Beyond Original Paper
285
+
286
Reddit-Binary. The random graph baseline results indicate that when the maximum node count is set to 50, the Discussion class is consistently selected by default, with an average class probability of 0.9887. Moreover, the analysis of the Reddit-Binary dataset, as presented in Tables 2 and 3, illustrates that GNNInterpreter achieves both good quantitative and qualitative results for the Question-Answer class. This suggests that the model effectively captures a distinguishing feature of this class, namely the presence of a few experts answering numerous questions. Despite its robust quantitative performance, GNNInterpreter appears to generate nearly random graphs for the Discussion class, providing limited explanation. This outcome is, however, somewhat expected, given that random graphs perform well for this class, as evident from the baseline.
289
+
290
MUTAG. Given the discrepancies observed between the quantitative and qualitative analyses of GNNInterpreter's performance on the MUTAG dataset, particularly the lack of alignment with dataset examples, we sought to investigate whether this issue stems from the inherent limitations of the dataset or from the GNNInterpreter model itself. To address this, we conducted a comparative analysis with GNNExplainer (Ying et al. (2019a)), and we report the obtained qualitative results in Table 5. We chose GNNExplainer due to its resemblance to GNNInterpreter in implementing a budget penalty for generating compact explanations, along with the straightforward experimental setup on this dataset. It is evident from these findings that GNNExplainer outperforms GNNInterpreter in terms of faithfulness and realism. Despite imperfections, we note that both graphs depict carbon rings accurately, with the graph for the mutagen class further featuring an NO2 bond.
292
+
293
+ ## 6.4 Analysis Of Training Instability
294
+
295
We observed the expected behavior of decreasing graph size and increasing correct class probability only with the question-answering class of the Reddit-Binary dataset, as can be seen in Table 6.
296
+
297
![10_image_0.png](10_image_0.png)

Table 5: Qualitative results on the MUTAG dataset from the GNNExplainer model.

![10_image_1.png](10_image_1.png)

Table 6: Graph size and correct class probability vs. iteration for selected examples that demonstrate different types of training behavior. Starting from the top left: expected behavior, never-converging, convergence, and non-convergence.
307
Depending on the random seed, some initializations resulted in explanation graphs with 0 class probability after 2000 iterations, such as Motif House-X seed 0, while other seeds managed to reach the early-stopping criterion of at least 0.9 correct class probability, like Motif House-X seed 2, as seen in Table 6. We will refer to getting stuck at 0 probability for the entire training as the never-converging scenario. Reaching early stopping will be referred to as convergence. There were also cases where GNNInterpreter ended up with an explanation graph with 0 correct class probability even though it produced larger but higher-probability explanation graphs during training; for example, the Cyclicity Acyclic plot in Table 6 missed the local minimum that would coincide with larger but higher-probability explanation graphs. We will call this non-convergence. These were the 4 main phenomena that we observed during our experiments. Certain classes were predisposed to experience a certain scenario based on their individual properties, yet these scenarios varied depending on the seed.
308
+
309
## 7 Discussion

## 7.1 Discussion Of The Results
310
+
311
+ This study assessed four claims regarding the effectiveness of GNNInterpreter as an explainability technique for Graph Neural Networks. Firstly, the GNNs used were reproduced to ensure a solid base for further analysis and they were found to match the original paper to a satisfying degree.
312
+
313
Regarding the realism and faithfulness of generated explanations, as per Claim 1, we found that adjusting hyperparameter values yielded mixed quantitative and qualitative results across datasets, with a significant sensitivity to seed variation. While the technique holds promise for generating realistic results, its unreliability poses a significant challenge to achieving them in practice. Using the available code repository, the necessity of tuning numerous hyperparameters across various seeds undermines the usefulness of this technique. For some classes, like Lollipop from the Shape dataset, GNNInterpreter was never able to reliably achieve realistic results, even after checking 100 seeds. This indicates that its performance is also heavily influenced by the dataset and class.

Upon our analysis of training instability, we found the main reason to be the graph size of the dataset. Even though relaxation is used to model the graphs as continuous distributions, the loss behaves discretely when the graph size is small. The class probabilities and logits we get from the GNN classifiers vary drastically, since removing even one edge from the graph can change the probability from 1 to 0. For example, the Acyclic class from the Cyclicity dataset has a discrete loss signal from the GNN, since removing a single edge can make a graph acyclic and adding a single edge can make it cyclic. This problem is not encountered in datasets like Reddit-Binary that have large graphs: the impact of a single edge on the GNN's decision is much lower in Reddit-Binary than in the Motif or Cyclicity datasets.
314
+
315
Another reason that we think could explain the instability in GNNInterpreter's training procedure is the implementation of the dynamic weighting of the budget penalty, combined with the arbitrary selection of an early-stopping criterion. These design decisions were not mentioned in the original paper, and during testing we observed them to have a large impact on model performance: sometimes their presence was the key to achieving convergence, while at other times they led to non-convergence. Both the dynamic budget penalty weight and the early stopping depend on the max number of nodes parameter. The weight of the budget penalty was increased if the explanation graph was larger than the max number of nodes and had a correct class probability higher than a certain threshold; if the probability threshold was not reached, the budget penalty weight was decreased to incentivize exploration (see the sketch below). The max number of nodes parameter was not mentioned in the original paper and needs to be fine-tuned for each dataset due to variations in graph sizes. Our experiments on modifying and removing these mechanisms altogether can be found in Appendix E. A simple way to reduce non-convergence would be to save the best model during training and remove the early stopping.
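A sketch of the dynamic weighting scheme as we understood it from the code base; the threshold and multiplier values are illustrative assumptions, not the authors' exact numbers.

```python
def update_budget_weight(weight, graph_size, max_nodes, class_prob,
                         prob_threshold=0.9, factor=1.1):
    """Heuristic re-weighting of the budget penalty (illustrative values).

    Tighten the penalty once the target class is predicted confidently but the
    explanation graph is still larger than max_nodes; loosen it to encourage
    exploration while the probability threshold has not been reached.
    """
    if class_prob >= prob_threshold and graph_size > max_nodes:
        return weight * factor
    if class_prob < prob_threshold:
        return weight / factor
    return weight
```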
316
+
317
The final type of training instability is the never-converging scenario. We observed this scenario mostly in classes with highly specific structures, such as Complete-4 and Complete-5 from the Motif dataset. If the sampled graphs are not large enough, it is unlikely that 4 or 5 completely connected nodes will occur, and it is very unlikely that GNNInterpreter will explore larger graphs because of the budget penalty. For complicated or highly specific classes, a large graph must be sampled to decrease the dependence on random initialization. Sampling larger graphs, however, slows down GNNInterpreter significantly. Moreover, the size of the sampled graph is introduced as a new parameter not mentioned in the original paper. Determining both the size of the sampled graph and the maximum number of nodes for dynamic weighting necessitates domain knowledge for appropriate selection. The maximum graph size of a class could be used instead, avoiding domain knowledge while ensuring a large enough graph to sample from.
320
+
321
As for the faithfulness of explanation graphs, we found that, for the MUTAG dataset, good quantitative results did not lead to good qualitative results. This indicates that, when using this technique on a new problem, it is possible that the provided explanations are not representative even after proper fine-tuning. GNNInterpreter yielded faithful qualitative results for Reddit-Binary, highlighting the importance of having a large enough graph to ensure the class probability loss is easily optimizable. Overall, using the only publicly available code repository, we found that this method requires domain knowledge for tuning parameters and is unstable on datasets with small graphs. This shows that Claim 1 holds for datasets with large graphs, where the impact of a single node on the classification outcome is relatively small. Claim 1 does not hold for datasets with smaller graphs; for these datasets, faithful explanations can reliably be generated only through extensive parameter tuning with domain knowledge or by trying random initializations.
324
+
325
+ Moreover, by being able to train at least some good interpreters on multiple datasets, each with unique types of node features, our results provide evidence in support of Claim 2. While results were not good for the Shape dataset, the inclusion of the Reddit-Binary dataset shows that GNNInterpreter can perform well without node or edge features. The Cyclicity dataset showed good results for datasets with only edge features, while MUTAG and Motif showed promising results for datasets with just node features. This just leaves the case of a dataset with both node and edge features, which could be explored further in future research. However, our results do not indicate that GNNInterpreter would encounter challenges with such a dataset.
326
+
327
Despite the limitation of solely experimenting with XGNN on the MUTAG dataset, our results provide a sufficient basis for drawing meaningful conclusions about Claims 3 and 4. As evident from our qualitative results, GNNInterpreter did not produce explanation graphs that aligned significantly better with the dataset than XGNN's, for either the mutagen or the non-mutagen class. Furthermore, based on our qualitative analysis, explanations generated by GNNExplainer outperformed GNNInterpreter. Therefore, for the MUTAG dataset, our results contradict Claim 3. While we were able to replicate the training times for a single model as reported by the authors, it is worth noting that for most datasets we had to train on multiple seeds (as many as 20) to obtain a single good interpreter. Particularly for classes with greater complexity, we observed a decrease in the number of good models. As such, we believe that reported training times should better reflect the actual amount of time required to achieve a good model, either by trying random seeds or by tuning parameters and sampling larger graphs for complicated classes. Nevertheless, for a single model, we found GNNInterpreter to be nearly 38 times faster than XGNN, and we thus affirm the validity of Claim 4 on the MUTAG dataset.
328
+
329
+ GNNInterpreter can be made more reliable, more stable, and less dependent on domain knowledge at the cost of increased training times. As mentioned in the discussion, using the maximum graph size of a class as the sampled-graph size can eliminate the never-converging problem, reducing the reliance on random initializations as well as on domain knowledge. The exploration-exploitation trade-off can be better balanced by carefully selecting the maximum number of nodes and by saving the best model.
330
+
331
+ ## 7.2 Reflection: What Was Easy? What Was Difficult?
332
+
333
+ The architecture of GNNInterpreter was clearly described in the paper, both intuitively and formally. The set-up and objectives of each experiment were also explicitly stated in sufficient detail. Despite not being linked from the paper, the original implementation was publicly available in one of the authors' personal GitHub repositories. Beyond that, model checkpoints and the code for generating the synthetic datasets were also available, together with some general experimental notebooks. However, it was not trivial to recreate the experiments. The main reasons include the perplexing structure of the code base, the lack of documentation, the abundance of unused or erroneous code, and, more importantly, the numerous discrepancies we found between the paper and the open-source implementation regarding implementation details and hyperparameter values.
334
+
335
+ ## 7.3 Communication With Original Authors
336
+
337
+ We contacted the original authors seeking clarification on various aspects of the paper and the associated code. Specifically, our questions covered confirming the official repository and its recent updates, identifying potential bugs in the code base, understanding the configuration of the regularization weights, confirming seed usage and averaging when reporting results, and resolving mismatches between the hyperparameter values in the appendix and those in the code repository. We also inquired about certain aspects that were present in the code but not mentioned in the paper, such as thresholding the qualitative results and adding a mean penalty to the weighting criterion. Unfortunately, we have not received any response.
338
+
339
+ ## References
340
+
341
+ Daniel W Apley and Jingyu Zhu. Visualizing the effects of predictor variables in black box supervised learning models. *Journal of the Royal Statistical Society Series B: Statistical Methodology*, 82(4):1059–1086, 2020.
342
+
343
+ Federico Baldassarre and Hossein Azizpour. Explainability techniques for graph convolutional networks. *arXiv preprint arXiv:1905.13686*, 2019.
346
+
347
+ Enyan Dai and Suhang Wang. Towards self-explainable graph neural network. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pp. 302–311, 2021.
348
+
349
+ Asim Kumar Debnath, Rosa L Lopez de Compadre, Gargi Debnath, Alan J Shusterman, and Corwin Hansch. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity. *Journal of medicinal chemistry*, 34(2):786–797, 1991.
352
+
353
+ Alexandre Duval and Fragkiskos D Malliaros. Graphsvx: Shapley value explanations for graph neural networks. In *Machine Learning and Knowledge Discovery in Databases. Research Track: European Conference, ECML PKDD 2021, Bilbao, Spain, September 13–17, 2021, Proceedings, Part II 21*, pp. 302–318. Springer, 2021.
356
+
357
+ Jerome H. Friedman. Greedy function approximation: A gradient boosting machine. *The Annals of Statistics*, 29(5):1189–1232, 2001. doi: 10.1214/aos/1013203451. URL https://doi.org/10.1214/aos/1013203451.
359
+
360
+ Amirata Ghorbani, James Wexler, James Y Zou, and Been Kim. Towards automatic concept-based explanations. *Advances in neural information processing systems*, 32, 2019.
361
+
362
+ Edgar N Gilbert. Random graphs. *The Annals of Mathematical Statistics*, 30(4):1141–1144, 1959.
+
+ Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry, 2017.
363
+
364
+ Giles Hooker. Generalized functional anova diagnostics for high-dimensional functions of dependent variables. *Journal of Computational and Graphical Statistics*, 16(3):709–732, 2007.
367
+
368
+ Jaykumar Kakkad, Jaspal Jannu, Kartik Sharma, Charu Aggarwal, and Sourav Medya. A survey on explainability of graph neural networks. *arXiv preprint arXiv:2306.01958*, 2023.
369
+
370
+ Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks, 2017.
+
+ Wanyu Lin, Hao Lan, and Baochun Li. Generative causal explanations for graph neural networks. In *International Conference on Machine Learning*, pp. 6666–6679. PMLR, 2021.
371
+
372
+ Dongsheng Luo, Wei Cheng, Dongkuan Xu, Wenchao Yu, Bo Zong, Haifeng Chen, and Xiang Zhang. Parameterized explainer for graph neural network. *Advances in neural information processing systems*, 33:19620–19631, 2020a.
374
+
375
+ Dongsheng Luo, Wei Cheng, Dongkuan Xu, Wenchao Yu, Bo Zong, Haifeng Chen, and Xiang Zhang. Parameterized explainer for graph neural network. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 19620–19631. Curran Associates, Inc., 2020b. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/e37b08dd3015330dcbb5d6663667b8b8-Paper.pdf.
377
+
378
+ Chris J Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A continuous relaxation of discrete random variables. *arXiv preprint arXiv:1611.00712*, 2016.
379
+
380
+ Lucie Charlotte Magister, Dmitry Kazhdan, Vikash Singh, and Pietro Liò. Gcexplainer: Human-in-the-loop concept-based explanations for graph neural networks. *arXiv preprint arXiv:2107.11889*, 2021.
381
+
382
+ Christopher Morris, Nils M. Kriege, Franka Bause, Kristian Kersting, Petra Mutzel, and Marion Neumann. Tudataset: A collection of benchmark datasets for learning with graphs. In *ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)*, 2020. URL www.graphlearning.io.
385
+
386
+ Robert Schwarzenberg, Marc Hübner, David Harbecke, Christoph Alt, and Leonhard Hennig. Layerwise relevance visualization in convolutional text graph classifiers. *arXiv preprint arXiv:1909.10911*, 2019.
387
+
388
+ Caihua Shan, Yifei Shen, Yao Zhang, Xiang Li, and Dongsheng Li. Reinforcement learning enhanced explainer for graph neural networks. *Advances in Neural Information Processing Systems*, 34:22523–22533, 2021.
389
+
390
+ Carolin Strobl, Anne-Laure Boulesteix, Achim Zeileis, and Torsten Hothorn. Bias in random forest variable importance measures: Illustrations, sources and a solution. *BMC bioinformatics*, 8(1):1–21, 2007.
391
+
392
+ Carolin Strobl, Anne-Laure Boulesteix, Thomas Kneib, Thomas Augustin, and Achim Zeileis. Conditional variable importance for random forests. *BMC bioinformatics*, 9:1–11, 2008.
393
+
394
+ Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. *arXiv preprint arXiv:1710.10903*, 2017.
397
+
398
+ Minh Vu and My T Thai. Pgm-explainer: Probabilistic graphical model explanations for graph neural networks. *Advances in neural information processing systems*, 33:12225–12235, 2020.
399
+
400
+ Xiaoqi Wang and Han-Wei Shen. Gnninterpreter: A probabilistic generative model-level explanation for graph neural networks. *arXiv preprint arXiv:2209.07924*, 2022.
401
+
402
+ Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? *arXiv preprint arXiv:1810.00826*, 2018.
405
+
406
+ Pinar Yanardag and SVN Vishwanathan. Deep graph kernels. In *Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining*, pp. 1365–1374, 2015.
408
+
409
+ Zhitao Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, and Jure Leskovec. Gnnexplainer: Generating explanations for graph neural networks. *Advances in neural information processing systems*, 32, 2019a.
410
+
411
+ Zhitao Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, and Jure Leskovec. Gnnexplainer: Generating explanations for graph neural networks. *Advances in neural information processing systems*, 32, 2019b.
412
+
413
+ Hao Yuan, Jiliang Tang, Xia Hu, and Shuiwang Ji. Xgnn: Towards model-level explanations of graph neural networks. In *Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*, pp. 430–438, 2020.
415
+
416
+ Hao Yuan, Haiyang Yu, Jie Wang, Kang Li, and Shuiwang Ji. On explainability of graph neural networks via subgraph explorations. In *International conference on machine learning*, pp. 12241–12252. PMLR, 2021.
417
+
418
+ Hao Yuan, Haiyang Yu, Shurui Gui, and Shuiwang Ji. Explainability in graph neural networks: A taxonomic survey. *IEEE transactions on pattern analysis and machine intelligence*, 45(5):5782–5799, 2022.
419
+
420
+ Muhan Zhang and Yixin Chen. Link prediction based on graph neural networks. *Advances in neural information processing systems*, 31, 2018.
421
+
422
+ Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classification. In *Proceedings of the AAAI conference on artificial intelligence*, volume 32, 2018.
423
+
424
+ Zaixi Zhang, Qi Liu, Hao Wang, Chengqiang Lu, and Cheekong Lee. Protgnn: Towards self-explaining graph neural networks. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 36, pp. 9127–9135, 2022.
425
+
426
+ ## 8 Appendix A Original Hyperparameter Values
427
+
428
+ Table 7 gives the regularization weights reported in the original paper, while Table 8 gives the values from the experimental notebooks found in the authors' GitHub repository.
429
+
430
+ | Dataset   | Class        | RL1 | RL2 | Rb    | Rc  |
+ |-----------|--------------|-----|-----|-------|-----|
+ | MUTAG     | Mutagen      | 10  | 5   | 20    | 1   |
+ |           | Nonmutagen   | 5   | 2   | 10    | 2   |
+ | Cyclicity | Red Cyclic   | 10  | 5   | 10000 | 100 |
+ |           | Green Cyclic | 10  | 5   | 2000  | 50  |
+ |           | Acyclic      | 10  | 2   | 5000  | 50  |
+ | Motif     | House        | 1   | 1   | 5000  | 0   |
+ |           | House-X      | 5   | 2   | 2000  | 0   |
+ |           | Complete-4   | 10  | 5   | 10000 | 1   |
+ |           | Complete-5   | 10  | 5   | 10000 | 5   |
+ | Shape     | Lollipop     | 5   | 5   | 1     | 5   |
+ |           | Wheel        | 10  | 5   | 10    | 0   |
+ |           | Grid         | 1   | 1   | 2     | 0   |
+ |           | Star         | 10  | 2   | 200   | 0   |
445
+
446
+ | Dataset   | Class        | RL1 | RL2 | Rb | Rc |
+ |-----------|--------------|-----|-----|----|----|
+ | MUTAG     | Mutagen      | 1   | 1   | 1  | 0  |
+ |           | Nonmutagen   | 1   | 1   | 1  | 0  |
+ | Cyclicity | Red Cyclic   | 2   | 2   | 1  | 5  |
+ |           | Green Cyclic | 2   | 2   | 1  | 5  |
+ |           | Acyclic      | 2   | 2   | 1  | 5  |
+ | Motif     | House        | 1   | 1   | 1  | 0  |
+ |           | House-X      | 1   | 1   | 1  | 0  |
+ |           | Complete-4   | 1   | 1   | 1  | 0  |
+ |           | Complete-5   | 1   | 1   | 1  | 0  |
+ | Shape     | Lollipop     | 1   | 1   | 0  | 15 |
+ |           | Wheel        | 1   | 1   | 0  | 10 |
+ |           | Grid         | 1   | 1   | 0  | 20 |
+ |           | Star         | 1   | 1   | 0  | 10 |
461
+
462
+ Table 7: Values mentioned in the original paper for the regularization weights of GNNInterpreter.
463
+
464
+ Table 8: Values found in the authors' original experimental notebooks for the regularization weights of GNNInterpreter.
465
+
466
+ ## B Tests On Provided Hyperparameters
467
+
468
+ Table 9 and Table 10 below display a small-scale version of our experimental results averaged over 5 random seeds, where each dataset has been tested using the paper parameters and the notebook parameters, respectively. Note that we did not perform the same complete analysis with separate reporting of good and bad models; the purpose of this set-up is solely to motivate the necessity of hyperparameter tuning for obtaining the main results. The paper parameters result in empty graphs for the Motif dataset.
469
+
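+ For reference, the per-entry aggregation can be summarised by the following sketch (the training and evaluation callables are hypothetical placeholders, and a single sampled explanation graph per seed is assumed):
+
+ ```python
+ import statistics
+
+ def class_probability_stats(seeds, train_explainer, predict_class_prob):
+     """Mean and standard deviation of the GNN's predicted class probability over seeds.
+
+     `train_explainer(seed)` returns a trained explainer with a `sample()` method and
+     `predict_class_prob(graph)` returns the GNN's probability for the target class;
+     both are illustrative stand-ins, not the authors' API.
+     """
+     probs = []
+     for seed in seeds:
+         explainer = train_explainer(seed)
+         graph = explainer.sample()                # one sampled explanation graph
+         probs.append(predict_class_prob(graph))
+     return statistics.mean(probs), statistics.stdev(probs)
+ ```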
470
+ | Dataset   | Predicted Class Probability by GNN | | | |
+ |-----------|------------------------------|-----------------------------|---------------------------|---------------------------|
+ | MUTAG     | Mutagen: 1.000 ± 0.000       | Nonmutagen: 0.9990 ± 0.002  |                           |                           |
+ | Cyclicity | Red Cyclic: 0.200 ± 0.000    | Green Cyclic: 0.580 ± 0.056 | Acyclic: 0.730 ± 0.095    |                           |
+ | Motif     | House: 0.000 ± 0.000         | House-X: 0.000 ± 0.000      | Complete-4: 0.000 ± 0.000 | Complete-5: 0.000 ± 0.000 |
+ | Shape     | Lollipop: 0.000 ± 0.000      | Wheel: 0.000 ± 0.000        | Grid: 0.000 ± 0.000       | Star: 0.000 ± 0.000       |
480
+
481
+ | Dataset   | Predicted Class Probability by GNN | | | |
+ |-----------|------------------------------|-----------------------------|-----------------------------|---------------------------|
+ | MUTAG     | Mutagen: 1.000 ± 0.000       | Nonmutagen: 0.9712 ± 0.058  |                             |                           |
+ | Cyclicity | Red Cyclic: 1.000 ± 0.000    | Green Cyclic: 0.490 ± 0.476 | Acyclic: 0.447 ± 0.113      |                           |
+ | Motif     | House: 0.721 ± 0.195         | House-X: 0.181 ± 0.048      | Complete-4: 0.347 ± 0.096   | Complete-5: 0.225 ± 0.028 |
+ | Shape     | Lollipop: 0.7935 ± 0.053     | Wheel: 0.8965 ± 0.27        | Grid: 0.0881 ± 0.0988       | Star: 0.9806 ± 0.14       |
491
+
492
+ Table 9: The reproduced quantitative results for the 4 datasets using the hyperparameters in the original paper, averaged over 5 different seeds.
493
+
494
+ Table 10: The reproduced quantitative results for the 4 datasets using the hyperparameters in the original notebooks, averaged over 5 different seeds.
495
+
496
+ ## C Tuned Hyperparameter Values
+
+ Table 11 reports the regularization weights used for the quantitative and qualitative evaluations presented in the body of this paper.
+
+ | Dataset   | Class        | RL1 | RL2 | Rb | Rc |
+ |-----------|--------------|-----|-----|----|----|
+ | MUTAG     | Mutagen      | 10  | 5   | 20 | 1  |
+ |           | Nonmutagen   | 5   | 2   | 10 | 2  |
+ | Cyclicity | Red Cyclic   | 2   | 2   | 1  | 5  |
+ |           | Green Cyclic | 2   | 2   | 1  | 5  |
+ |           | Acyclic      | 2   | 2   | 1  | 5  |
+ | Motif     | House        | 1   | 1   | 1  | 0  |
+ |           | House-X      | 1   | 1   | 1  | 0  |
+ |           | Complete-4   | 1   | 1   | 1  | 0  |
+ |           | Complete-5   | 1   | 1   | 1  | 0  |
+ | Shape     | Lollipop     | 1   | 1   | 0  | 15 |
+ |           | Wheel        | 1   | 1   | 0  | 10 |
+ |           | Grid         | 1   | 1   | 0  | 50 |
+ |           | Star         | 1   | 1   | 0  | 20 |
+
+ Table 11: Refined regularization weights of GNNInterpreter for explaining the GNN models corresponding to each dataset.
517
+
518
+ ## D Multiple Explanation Graphs Per Class Per Dataset
519
+
520
+ Table 14 below augments the qualitative results reported in the paper with a few extra explanation graphs that were generated during the reproducibility study for each class in each dataset.
521
+
522
+ (Table 14: additional explanation graphs for MUTAG (Mutagen, Non-mutagen), Cyclicity (Red Cyclic, Green Cyclic, Acyclic), and Motif (House, House-X, Complete-4, Complete-5); continued on next page.)
530
+
531
+ ![19_image_0.png](19_image_0.png)
532
+
533
+ ## E Dynamic Weight Of Budget Penalty And Early Stopping
534
+
535
+ The official code repository of the original paper dynamically adjusts the weight of the budget penalty regularization during training, a detail overlooked in the original paper. The weight of the budget penalty decreases until the model achieves a target class probability of 0.9, after which it begins to increase again.
536
+
537
+ This mechanism aims to enlarge the graph until the desired class probability is achieved, and to subsequently reduce its size to produce a more concise explanation graph. The 0.9 target class probability criterion was chosen by the original authors in the official code repository and is employed alongside a maximum-number-of-nodes criterion for early stopping.
538
+
539
+ The official code repository featured two distinct implementations for dynamically adjusting the weight of the budget penalty. In the most recent commit, the weight was increased by multiplying it by 1.1 and decreased by multiplying it by 0.95 at each iteration, with the budget penalty weight initialized to 1. In contrast, the earliest commit in the repository modified the dynamic weight by adding and subtracting 0.1 from the budget penalty weight per iteration. In this latter version, the initial budget penalty weight could be adjusted via a parameter.
540
+
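+ For clarity, the two schedules can be summarised as follows (a paraphrase of the behaviour described above using our own names, not the authors' code):
+
+ ```python
+ TARGET_PROB = 0.9  # target class probability used in the official repository
+
+ def update_budget_weight_latest(weight, class_prob):
+     # Most recent commit: increase by a factor of 1.1 once the target class
+     # probability is reached, otherwise decay by a factor of 0.95
+     # (weight initialized to 1).
+     return weight * 1.1 if class_prob >= TARGET_PROB else weight * 0.95
+
+ def update_budget_weight_earliest(weight, class_prob):
+     # Earliest commit: add or subtract 0.1 per iteration; the initial weight is a
+     # configurable parameter in that version. Clamping at zero is our addition.
+     return weight + 0.1 if class_prob >= TARGET_PROB else max(weight - 0.1, 0.0)
+ ```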
541
+ We experimented with the two versions of the repository, as well as with removing the dynamic weight penalty and the early stopping criteria entirely. Results varied across datasets: MUTAG showed class probabilities and graph sizes similar to the original paper with both versions, while the Cyclicity dataset performed better on the latest version, yielding smaller graphs than with the oldest version. Conversely, the Motif and Shape datasets performed significantly better on the oldest version of the repository, producing smaller explanation graphs. For example, the Motif dataset yields explanation graphs of 80 to 100 nodes with the latest version, whereas the oldest version resulted in graphs of around 20 nodes. A size of 20 is still larger than reported in the original paper, but it is much closer and the smallest size we could achieve.
542
+
543
+ Finally, we tried removing the dynamic weight penalty and the early stopping criteria altogether to prevent the oscillation problem mentioned in the discussion section. Instead of using the model from the latest iteration, we used the model with the minimum loss, which reduced the effect that unstable training had on our results. However, removing the dynamic weight produced much larger explanation graphs (around 100 nodes) for the Motif dataset, which led us not to use this version. A minimal sketch of the best-model tracking we used is given below.
544
+
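+ A minimal sketch of the best-model tracking (the training-step callable and iteration budget are hypothetical; the explainer is assumed to expose the usual `state_dict()`/`load_state_dict()` interface):
+
+ ```python
+ import copy
+
+ def train_keep_best(explainer, train_step, max_iterations):
+     """Run `train_step(explainer) -> loss` repeatedly and keep the lowest-loss weights."""
+     best_loss, best_state = float("inf"), None
+     for _ in range(max_iterations):
+         loss = train_step(explainer)
+         if loss < best_loss:                       # remember the best model seen so far
+             best_loss = loss
+             best_state = copy.deepcopy(explainer.state_dict())
+     if best_state is not None:
+         explainer.load_state_dict(best_state)      # restore the minimum-loss model
+     return best_loss
+ ```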
545
+ ## F Synthetic Dataset Generation
546
+
547
+ The synthetic datasets used in this reproducibility study were generated following the same procedures as in the original paper. These procedures are detailed for each dataset in Figure 1 below, using the algorithm pseudocode extracted from the original paper.
548
+
549
+ ![20_image_0.png](20_image_0.png)
550
+
551
+ Algorithm 4: Motif dataset generation procedure
+ Node Classes: {RED, GREEN, ORANGE, BLUE, MAGENTA}; Graph Classes: {HOUSE, HOUSE-X, COMPLETE-4, COMPLETE-5, OTHERS}
+ Input: Rome Dataset. Output: A collection of pairs consisting of a graph and its label.
+ 1. Let {G_HOUSE, G_HOUSE-X, G_COMPLETE-4, G_COMPLETE-5} be 4 motifs for the corresponding classes.
+ 2. Define ⊕ as a graph operator such that G1 ⊕ G2 generates a union graph G1 ∪ G2 with an additional edge between a random node in G1 and a random node in G2.
+ 3. for G in Rome Dataset do
+ 4. Randomly assign a color C_node ∈ {RED, GREEN, ORANGE, BLUE, MAGENTA} to each node in G.
+ 5. Randomly select a label L ∈ {OTHERS, HOUSE, HOUSE-X, COMPLETE-4, COMPLETE-5}.
+ 6. if L = OTHERS then
+ 7. Let G_OTHERS be a random motif in {G_HOUSE, G_HOUSE-X, G_COMPLETE-4, G_COMPLETE-5} but with a random edge removed.
+ 8. yield (G ⊕ G_L, L)
563
+
564
+ ![20_image_1.png](20_image_1.png)
565
+
566
+ Figure 1: Synthetic datasets generation procedures.
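+
+ For concreteness, a minimal NetworkX-style sketch of the Motif generation procedure (Algorithm 4) is given below; the motif graphs and the Rome dataset loader are assumed to be provided elsewhere, and this is our illustration rather than the authors' implementation:
+
+ ```python
+ import random
+ import networkx as nx
+
+ NODE_COLORS = ["RED", "GREEN", "ORANGE", "BLUE", "MAGENTA"]
+ LABELS = ["OTHERS", "HOUSE", "HOUSE-X", "COMPLETE-4", "COMPLETE-5"]
+
+ def attach(g1, g2):
+     """Union of g1 and g2 plus one edge between a random node of each (the operator in step 2)."""
+     g = nx.disjoint_union(g1, g2)                 # relabels g2's nodes after g1's
+     u = random.randrange(g1.number_of_nodes())
+     v = g1.number_of_nodes() + random.randrange(g2.number_of_nodes())
+     g.add_edge(u, v)
+     return g
+
+ def generate_motif_dataset(rome_graphs, motifs):
+     """`motifs` maps the names HOUSE, HOUSE-X, COMPLETE-4, COMPLETE-5 to motif graphs."""
+     for g in rome_graphs:
+         g = g.copy()
+         for n in g.nodes:                         # step 4: random node colours
+             g.nodes[n]["color"] = random.choice(NODE_COLORS)
+         label = random.choice(LABELS)             # step 5: random graph label
+         if label == "OTHERS":                     # steps 6-7: motif with one edge removed
+             motif = random.choice(list(motifs.values())).copy()
+             motif.remove_edge(*random.choice(list(motif.edges)))
+         else:
+             motif = motifs[label].copy()
+         yield attach(g, motif), label             # step 8
+ ```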
8cYcR23WUo/8cYcR23WUo_meta.json ADDED
@@ -0,0 +1,25 @@
1
+ {
2
+ "languages": null,
3
+ "filetype": "pdf",
4
+ "toc": [],
5
+ "pages": 21,
6
+ "ocr_stats": {
7
+ "ocr_pages": 2,
8
+ "ocr_failed": 0,
9
+ "ocr_success": 2,
10
+ "ocr_engine": "surya"
11
+ },
12
+ "block_stats": {
13
+ "header_footer": 21,
14
+ "code": 0,
15
+ "table": 8,
16
+ "equations": {
17
+ "successful_ocr": 15,
18
+ "unsuccessful_ocr": 0,
19
+ "equations": 15
20
+ }
21
+ },
22
+ "postprocess_stats": {
23
+ "edit": {}
24
+ }
25
+ }