RedTachyon committed · Commit 64e6292 · Parent: 73effdd
Upload folder using huggingface_hub
Browse files:
- p9KSFrTLx0/10_image_0.png +3 -0
- p9KSFrTLx0/11_image_0.png +3 -0
- p9KSFrTLx0/12_image_0.png +3 -0
- p9KSFrTLx0/13_image_0.png +3 -0
- p9KSFrTLx0/14_image_0.png +3 -0
- p9KSFrTLx0/14_image_1.png +3 -0
- p9KSFrTLx0/15_image_0.png +3 -0
- p9KSFrTLx0/15_image_1.png +3 -0
- p9KSFrTLx0/16_image_0.png +3 -0
- p9KSFrTLx0/16_image_1.png +3 -0
- p9KSFrTLx0/16_image_2.png +3 -0
- p9KSFrTLx0/1_image_0.png +3 -0
- p9KSFrTLx0/p9KSFrTLx0.md +799 -0
- p9KSFrTLx0/p9KSFrTLx0_meta.json +25 -0
# Mixture Degree-Corrected Stochastic Block Model For Multigroup Community Detection In Multiplex Graphs

Anonymous authors
Paper under double-blind review

## Abstract

Multiplex graphs have emerged as a powerful tool for modeling complex data structures due to their ability to handle multiple relational layers. Clustering within a multiplex graph can involve merging vertices into communities that are consistent across all layers, grouping similar layers into clusters, or creating overlapping clusters among vertices and layers. However, a multiplex graph may exhibit distinct vertex communities based on the specific layers to which a vertex is connected. This scenario, termed multi-group community detection, significantly enhances the accuracy of clustering processes and aids in the interpretation of partitions.
To date, the literature on state-of-the-art community detection has not extensively addressed this modeling approach. In this paper, we introduce a novel methodology referred to as the "Mixture Degree-Corrected Stochastic Block Model." This generative model, an extension of the widely utilized Degree-Corrected Stochastic Block Model (DCSBM), is designed to cluster similar layers by their community structures while simultaneously identifying communities within each group of layers. We provide a rigorous definition of the model and utilize an iterative technique to perform inference computations. Furthermore, we assess the identifiability of our proposed model and demonstrate the consistency of the maximum likelihood function through analytical analysis. The effectiveness of our method is evaluated using both real-world datasets and synthetic graphs.
## 1 Introduction

Recent technological advancements have precipitated an exponential increase in data accumulation, ushering in the era of big data. This new ecosystem presents novel challenges in exploring and analyzing extensive datasets, as highlighted in recent literature Elgendy & Elragal (2014). Data often comes from multiple sources, offering multiple perspectives through diverse features. Such datasets require sophisticated models that capture intricate relationships and interdependencies across different data types and sources Devagiri et al. (2021); Niu et al. (2016). To address this increasing complexity, multiplex graphs have become a relevant asset Hammoud & Kramer (2020); Zweig (2016).
A multiplex graph consists of a set of vertices connected across multiple layers Han et al. (2023); Magnani et al. (2021). Each layer in a multiplex graph uniquely represents a set of edges that models specific similarities between vertices, thereby serving as an effective model for multi-relational data. Additionally, multiplex graphs are adept at modeling time-varying data, offering a robust framework for dynamic data analysis Xia et al. (2020).
Clustering has long been recognized as an effective means for understanding and exploring data across various domains by identifying groups of individuals with strong similarities Fortunato (2010a); Bedi & Sharma (2016). In multiplex graphs, the clustering process poses a significant challenge, yet offers promising solutions for analyzing complex data structures Wang et al. (2019). Community detection in multiplex graphs aims to identify groups characterized by high intra-connectivity and low inter-connectivity Fortunato (2010b).

To this end, numerous algorithms employing diverse approaches such as optimization De Meo et al. (2011); Que et al. (2015), spectral computation Li et al. (2018), consensus clustering Mandaglio et al. (2018), and inference Shuo & Chai (2016) have been developed.
![1_image_0.png](1_image_0.png)

Figure 1: Presentation of a multiplex graph with 4 layers. The results of community detection algorithms on a multiplex graph are categorized into three distinct types of groupings. Each grouping is represented by color-coded boxes, with each color signifying a different community. In figure (a), unified communities are depicted, showing a single community structure shared across all layers. Figure (b) presents a visualization of overlapping communities, where communities extend over several but not all layers. Finally, figure (c) displays multi-group communities, in which similar layers are clustered together, and each cluster maintains a uniform community partition.
The results of these methods can generally be categorized into two groups. The first group optimizes communities to be identical and unified across all layers, which may overlook the inherent diversity within each layer, as presented in (a) of Figure 1. The second approach, multilayer communities, allows communities to overlap across layers and vertices, so that vertices may belong to different communities in different layers, as presented in (b) of Figure 1. This model, while computationally intensive, also poses challenges in interpretation and constrains the re-computation of partitions when a new layer arrives, especially as the number of layers increases.
In this paper, we propose a novel approach for partitioning multiplex graphs, where layers are clustered into groups, and vertices within each group are organized into communities that remain consistent across all layers, as presented in the third clustering outcome (c) of Figure 1. This multi-group perspective not only captures the intrinsic structure of each layer but also simplifies the interpretation of the results. Additionally, this method proves advantageous when integrating new layers post-partitioning, as it allows for the incorporation of these layers into existing partitions or the creation of new partitions if necessary.

Moreover, clustering multiplex layers into groups of similar networks has become an active research area, treating each layer as an individual within a population of networks Mantziou et al. (2023); van der Laan et al. (2022). However, previous approaches have often treated the clustering of layers and the partitioning of vertices independently, neglecting the structural differences in community configurations when layers are clustered. Our dual clustering approach thus aims to enhance both layer clustering and community detection within multiplex graphs, providing a comprehensive solution to the challenges posed by complex network structures.
Although multi-group community detection has not been widely addressed in the literature, its relevance is apparent in real-world datasets. For example, individuals sharing similar musical tastes may not necessarily align in their political or sports affiliations, demonstrating the structural diversity of layers. In the financial sector, multiplex graphs represent various transaction types within banking systems, such as online, wire, and ATM transactions. Grouping these transactions by similar dynamics and identifying communities within these groups are crucial for detecting anomalous behaviors and preventing fraud. Additionally, as we present in the experiments section, in the field of neuroscience the study of brain connectivity through magnetic resonance imaging offers profound insights. Our experiments demonstrate that this approach can effectively group subjects with similar neurological diagnoses, illustrating the practical applications of multi-group community detection in medical research.
We introduce a new approach for multi-group community detection based on the Stochastic Block Model (SBM), a generative model extensively developed for community detection, which recognizes the presence of communities within a graph by defining the probability of edges between vertices based on the communities to which they belong Lee & Wilkinson (2019). SBM has been applied to various graph types including single-layer Abbe (2017), multiplex Barbillon et al. (2017), and dynamic networks Corneli (2017). The Degree-Corrected Stochastic Block Model (DCSBM), an advancement of SBM, addresses the heterogeneity of vertex degrees within communities, relaxing the assumption of uniform degree distribution Karrer & Newman (2011). Despite its extensive application in modeling single partitions on multiplex graphs, SBM often fails to capture the diversity of clusters within such graphs, which can lead to suboptimal community detection. In response, this paper introduces the Mixture DCSBM (MDCSBM), a novel model for multi-group partitioning of a multiplex graph. Our approach **groups similar layers and partitions the vertices of each layer** within these groups into distinct blocks, as depicted on the right of Figure 1. We hypothesize that layers within the same group share an identical DCSBM distribution. To facilitate this dual clustering, we employ the Expectation-Maximization (EM) algorithm to determine the groupings of layers, and the Variational Expectation-Maximization (VEM) technique to assign vertices to their respective blocks within each group.

Throughout this document, we use the term "partitioning" to refer to the formation of vertex communities and "clustering" to describe the grouping of layers.
## 2 Related Works

Community detection in multiplex graphs can yield diverse types of partitions. For instance, some approaches employ spectral representation-based algorithms and tensor-based algorithms that fuse layers into a centroid graph. This graph is constrained to have K connected components, each representing a community, where K is the number of clusters, aiming to capture highly consistent communities Kang et al. (2019); Wang et al. (2019); Papalexakis et al. (2016). Other methods focus on consensus clustering, which involves computing communities for each layer individually and then identifying the most consistent community across all layers for multiplex community detection Berlingerio et al. (2013); Tagarelli et al. (2017); Tang et al. (2012).

Additionally, algorithms originally designed for mono-layer community detection have been adapted for multiplex graphs, often allowing for overlapping between vertices and layers to form multilayer communities, such as Generalized Modularity and Multilayer Infomap Mucha et al. (2010); Afsarmanesh & Magnani (2016); De Domenico et al. (2015); Wilson et al. (2017). Recent research has also focused on identifying clusters of similar layers that may exhibit the same structure van der Laan et al. (2022). In this approach, each graph is treated as an individual within a population of networks. Various algorithms aim to identify a set of similar layers using methods such as mixture parametric models Mantziou et al. (2023); Kemp et al. (2006), nonparametric methods derived from minimum code length descriptions Kirkley et al. (2023); Kirkley & Newman (2022), models based on graph distance measurements La Rosa et al. (2015), and latent space models Young et al. (2022).

However, these models, which aim to partition the graph into communities, often overlook the potential for multiple clusters within the multiplex graph and do not consider the similarity of communities between different layers. This dual consideration is crucial for enhancing the accuracy and comprehensibility of results, especially as the number of layers increases. In the field of tabular data, the dual clustering of both features and samples, referred to as co-clustering, has been extensively explored and has proven beneficial for in-depth data analysis Ailem et al. (2015); Nadif & Govaert (2010). Thus, bridging this gap for graph-structured data is crucially relevant.
The Stochastic Block Model (SBM) is a generative probabilistic model widely used for community detection in networks. Originally developed for single-layer networks, SBM has been extended to accommodate multi-layer clustering. A notable example is the introduction of the Multi-Layer Stochastic Block Model (MLSBM), which incorporates various types of layer aggregation Vallès-Català et al. (2016). In contrast, a directed graph model with layers generated independently from the same distribution was explored in De Bacco et al. (2017). Furthermore, Paul & Chen (2016) adapted SBM by assigning each layer its own affinity matrix, yet constrained these matrices to yield consistent communities across layers. Additional developments include the work of Amini et al. (2024); Roy et al. (2006), which focuses on a hierarchical generalization of SBM. This approach seeks to uncover underlying community structures by considering the hierarchical nature of real-world networks. Other adaptations of MLSBM use techniques such as Variational Expectation-Maximization to infer multi-layer graph structures Barbillon et al. (2017); Corneli et al. (2016); Han et al. (2015). However, these models often grapple with the challenge of exponential parameter scaling, which complicates their application to real-world graphs. Moreover, they generally lack mechanisms for constraining the formation of multi-group structures within the multiplex graph, a critical aspect for capturing the complexity of such networks.

In this paper, we present a novel extension of the DCSBM model that jointly represents the affiliation of each layer to a group (set of layers) and the assignment of the vertices within each group to blocks (sets of vertices). We employ the EM-VEM technique to infer the model's parameters and estimate the assignment variables effectively, offering an improved approach for community detection in multiplex graphs.
## 3 Mixture DCSBM

This work achieves a dual clustering process by grouping similar layers into groups of layers, while the vertices within each group are further clustered into shared blocks of vertices. The estimation process of MDCSBM involves determining layer-to-group variables to identify the group of each layer. It is assumed that the edges within each group follow a unique DCSBM distribution, treating layers within the same group as independent samples from the same distribution. Vertex-to-block variables are therefore estimated for each group to identify the block of each vertex within that group.
## 3.1 Model Definition

Consider a multiplex graph denoted as $\mathcal{G} = \{G_1, ..., G_L\}$ comprising $L$ layers, where $G_l = \{V, E_l\}$ represents a single layer, $l \in [1, L]$ indicates the layer index, $V$ is the set of vertices with $|V| = N$, and $E_l$ is the set of edges within layer $l$. Let $\mathcal{A} = \{A^1, ..., A^L\}$ be the corresponding adjacency matrices of the multiplex graph $\mathcal{G}$, where $A^l$ stands for the adjacency matrix of the graph $G_l$. The underlying graph model in this study is an unweighted and undirected multiplex graph, where the edge distributions follow a Bernoulli distribution. The generalization of this model to a directed graph is straightforward. Finally, an edge $A^l_{ij}$ is defined by a dyad $(i, j)$ representing its extremities. Let us consider partitioning the multiplex graph's layers into $K$ groups and assume that the vertices of group $k$ are divided into $Q^k$ blocks, where $k \in [1, K]$. Given the block assigned to each vertex, the probability of an edge $A^l_{i,j}$ in layer $l$, within group $k$, is expressed under the MDCSBM model as follows:

$$P(A_{i,j}^{l}|\mathbf{Z}^{k},\mathbf{\Pi}^{k},\mathbf{\Theta}^{k})=\theta_{i}^{k}\theta_{j}^{k}\pi_{Z_{i}^{k},Z_{j}^{k}}^{k}\tag{1}$$

where $\mathbf{Z}^{k} = \{Z^{k}_{1}, ..., Z^{k}_{N}\}$ is the set of vertex-to-block assignments in group $k$ with $Z^{k}_{i} \in \{1, ..., Q^{k}\}$, and $\mathbf{\Theta}^{k} = \{\theta^{k}_{1}, ..., \theta^{k}_{N}\}$ is the set of degree heterogeneity parameters for the vertices in group $k$, such that $\theta^{k}_{i} > 0$ and $\sum_{i: Z^{k}_{i}=q} \theta^{k}_{i} = 1$, $\forall q \in [1, Q^{k}]$. The matrix $\mathbf{\Pi}^{k}$ has $Q^{k} \times Q^{k}$ elements $\pi^{k}_{q,w}$, $\forall (q, w) \in \{1, ..., Q^{k}\}^{2}$, each representing the probability of an edge existing within group $k$, depending on the blocks of its dyad $(i, j)$.

Let us consider a given layer $G_l$ with known vertex-to-block assignments for each group, $\mathbf{Z} = \{\mathbf{Z}^{1}, ..., \mathbf{Z}^{K}\}$. The probability of an edge $A^{l}_{ij}$ existing between the dyad $(i, j)$ under the MDCSBM model, conditioned on $\mathbf{Z}$, can be described as a mixture of $K$ independent DCSBMs, expressed as:

$$P(A^{l}_{ij}=1|\mathbf{Z};\boldsymbol{\beta},\boldsymbol{\Pi},\boldsymbol{\Theta})=\sum_{k=1}^{K}\beta^{k}\theta^{k}_{i}\theta^{k}_{j}\pi^{k}_{Z^{k}_{i},Z^{k}_{j}}\quad s.t.\;\sum_{k}\beta^{k}=1,\;\sum_{i: Z^{k}_{i}=q}\theta^{k}_{i}=1,\;\forall q\in[1,Q^{k}],\;\forall k\in[1,K]\tag{2}$$

where $\boldsymbol{\beta} = \{\beta^{1}, ..., \beta^{K}\}$ denotes the set of probabilities of layer $l$ being generated from group $k$, representing the mixture weights of MDCSBM, $\boldsymbol{\Pi} = \{\mathbf{\Pi}^{1}, \mathbf{\Pi}^{2}, ..., \mathbf{\Pi}^{K}\}$ is the set of parameters of each group, and $\boldsymbol{\Theta} = \{\mathbf{\Theta}^{1}, \mathbf{\Theta}^{2}, ..., \mathbf{\Theta}^{K}\}$ is the set of degree heterogeneity parameters for each group. This model incorporates $K$ distributions from which layers can be generated.
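To make the mixture in equation 2 concrete, the following minimal sketch evaluates the edge probability of a dyad as a weighted sum over $K$ group-specific DCSBM edge probabilities. All parameter values here are illustrative toy choices, not taken from the paper.

```python
import numpy as np

def mixture_edge_prob(i, j, beta, theta, pi, Z):
    """P(A^l_ij = 1 | Z) under the mixture DCSBM of equation 2:
    a beta-weighted sum over the K group-specific DCSBM edge probabilities."""
    prob = 0.0
    for k in range(len(beta)):
        q, w = Z[k][i], Z[k][j]  # blocks of vertices i and j inside group k
        prob += beta[k] * theta[k][i] * theta[k][j] * pi[k][q, w]
    return prob

# Toy instance: K = 2 groups, N = 4 vertices, Q^k = 2 blocks per group.
beta = [0.5, 0.5]                               # mixture weights, sum to 1
theta = [np.array([0.6, 0.4, 0.7, 0.3])] * 2    # degree parameters per group
pi = [np.array([[0.9, 0.1], [0.1, 0.8]])] * 2   # block affinity matrices
Z = [np.array([0, 0, 1, 1])] * 2                # vertex-to-block assignments

p = mixture_edge_prob(0, 1, beta, theta, pi, Z)
```

Since both toy groups share identical parameters here, the mixture collapses to a single DCSBM edge probability for the dyad (0, 1).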
104 |
+
|
105 |
+
To address the challenge of maximizing the log-likelihood function due to the sum in the mixture model, we introduce a set of latent variables Y that represents the layer-to-group assignment. Specifically, ylk, s.t l ∈ [1, L] and k ∈ [1, K], takes the value of one when layer l is generated from group k and zero otherwise.
|
106 |
+
|
107 |
+
The updated formulation for the probability of an existing edge is as follows:
|
108 |
+
|
109 |
+
$$P(A^{l}_{ij}=1|\mathbf{Y},\mathbf{Z};\mathbf{\Pi})=\prod_{k=1}^{K}(\theta^{k}_{i}\theta^{k}_{j}\pi^{k}_{Z^{k}_{i},Z^{l}_{j}})^{y_{lk}}$$ $$P(y_{lk}=1;\beta)=\prod_{k=1}^{K}(\beta^{k})^{y_{lk}}$$
|
110 |
+
$$(3)$$
|
111 |
+
|
Additionally, for any group $k$, we define the probability of a vertex $i$ being assigned to block $q$ as follows:

$$P(Z^{k}_{i}=q;\boldsymbol{\alpha}^{k})=\alpha^{k}_{q}\quad s.t.\;\sum_{q=1}^{Q^{k}}\alpha^{k}_{q}=1\tag{4}$$

such that $\boldsymbol{\alpha} = \{\boldsymbol{\alpha}^{1}, \boldsymbol{\alpha}^{2}, ..., \boldsymbol{\alpha}^{K}\}$ and $\boldsymbol{\alpha}^{k} = \{\alpha^{k}_{1}, \alpha^{k}_{2}, ..., \alpha^{k}_{Q^{k}}\}$.
Let us consider $\boldsymbol{\Delta} = \{\boldsymbol{\Delta}^{1}, \boldsymbol{\Delta}^{2}, ..., \boldsymbol{\Delta}^{K}\}$, with $\boldsymbol{\Delta}^{k} = \{\mathbf{\Pi}^{k}, \mathbf{\Theta}^{k}, \boldsymbol{\alpha}^{k}\}$, be the aggregation of group parameters. The complete log-likelihood of the proposed model is written as follows:

$$\mathcal{L}(\mathcal{A},\mathbf{Y},\mathbf{Z};\boldsymbol{\beta},\boldsymbol{\Delta})=\sum_{l=1}^{L}\sum_{k=1}^{K}y_{lk}\Big[\ln\beta^{k}+\mathcal{L}(A^{l},\mathbf{Z}^{k};\boldsymbol{\Delta}^{k})\Big]\tag{5}$$
where $\mathcal{L}(A^{l},\mathbf{Z}^{k};\boldsymbol{\Delta}^{k})$ is the complete log-likelihood of layer $l$ in group $k$ with parameters $\boldsymbol{\Delta}^{k}$, formulated as follows:

$$\mathcal{L}(A^{l},\mathbf{Z}^{k};\boldsymbol{\Delta}^{k})=\sum_{i,j,i\neq j}A^{l}_{ij}\ln(\pi^{k}_{Z_{i}Z_{j}})+(1-A^{l}_{ij})\ln(1-\pi^{k}_{Z_{i}Z_{j}})+\sum_{i,j,i\neq j}\ln(\theta^{k}_{i}\theta^{k}_{j})+\sum_{i=1}^{N}\ln(\alpha^{k}_{Z_{i}})\tag{6}$$
The verification of the parameters' identifiability and the assessment of the maximum likelihood consistency are provided in the supplementary materials.

In the context of inferring information from a given multiplex graph, the primary objectives are to assign each layer to a specific group $k$ using the variables $y_{lk}$, to assign each vertex $i$ within group $k$ to a particular block $q$ using the variables $Z^{k}_{i}$, and to optimize the parameters $\boldsymbol{\beta}$ and $\boldsymbol{\Delta}$.
## 4 Optimization Of The Log-Likelihood Function

As explained previously, the MDCSBM depends on layer-to-group and vertex-to-block assignment variables. We adopt an iterative approach to address this joint assignment clustering challenge. Specifically, we estimate the layer-to-group assignment variables using the Expectation-Maximization (EM) technique. Then, to infer the DCSBM model within each group, we use the Variational EM (VEM) technique, which has proven effective at maximizing the DCSBM distribution parameters while estimating the latent vertex-to-block variables.
## 4.1 Estimation Of Layer-To-Group Variables

The estimation of the layer-to-group latent variables is derived from equation 5, which defines the complete log-likelihood. The estimation process involves computing the expectation of the log-likelihood under the posterior distribution of the layer-to-group latent variables, which can be expressed as follows:

$$E_{\mathbf{Y}}[\mathcal{L}(\mathcal{A},\mathbf{Y},\mathbf{Z};\boldsymbol{\beta},\boldsymbol{\Delta})]=\sum_{l=1}^{L}\sum_{k=1}^{K}E(y_{lk})\Big[\ln\beta^{k}+\mathcal{L}(A^{l},\mathbf{Z}^{k};\boldsymbol{\Delta}^{k})\Big]\tag{7}$$
where $E(y_{lk})$ is the posterior probability of layer $l$ being generated from group $k$, defined as $p(y_{lk}|A^{l},\mathbf{Z}^{k})$. Using Bayes' theorem, the layer-to-group estimate is computed as follows:

$$E(y_{lk})=\frac{\beta^{k}P(A^{l},\mathbf{Z}^{k}|\boldsymbol{\Delta}^{k})}{\sum_{j}\beta^{j}P(A^{l},\mathbf{Z}^{j}|\boldsymbol{\Delta}^{j})}\tag{8}$$
where $P(A^{l},\mathbf{Z}^{k};\boldsymbol{\Delta}^{k})$ is written as follows:

$$P(A^{l},\mathbf{Z}^{k};\boldsymbol{\Delta}^{k})=P(A^{l}|\mathbf{Z}^{k};\mathbf{\Pi}^{k},\mathbf{\Theta}^{k})P(\mathbf{Z}^{k};\boldsymbol{\alpha}^{k})\tag{9}$$

This estimation favors the group that maximizes the likelihood of the layer. In order to assign each layer to a single group, the group is selected as follows:

$$y_{lk}=\operatorname*{argmax}_{j}\;y_{lj}\tag{10}$$
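In practice the ratio in equation 8 is best evaluated in log-space, since each $P(A^{l},\mathbf{Z}^{k}|\boldsymbol{\Delta}^{k})$ is a product over many dyads and underflows quickly. A minimal sketch (the per-group log-likelihood values below are illustrative placeholders, not computed from a real graph):

```python
import numpy as np

def layer_responsibilities(log_beta, log_lik):
    """E(y_lk) of equation 8, computed from ln(beta^k) and the per-group
    log-likelihoods ln P(A^l, Z^k | Delta^k) using the log-sum-exp trick."""
    scores = log_beta + log_lik        # unnormalized log posterior, shape (K,)
    scores = scores - scores.max()     # stabilize before exponentiating
    resp = np.exp(scores)
    return resp / resp.sum()

log_beta = np.log(np.array([0.3, 0.7]))   # mixture weights of K = 2 groups
log_lik = np.array([-120.0, -115.0])      # illustrative placeholder values
resp = layer_responsibilities(log_beta, log_lik)
group = int(np.argmax(resp))              # hard assignment of equation 10
```

The hard assignment of equation 10 then simply keeps the group with the largest responsibility.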
## 4.2 Maximization Of Likelihood Parameters And Vertex-To-Block Variables

Once the layer-to-group assignment is identified, the MDCSBM parameters can be maximized and the vertex-to-block variables estimated.
## 4.2.1 Maximization Of β

Considering equation 5, the optimization of $\boldsymbol{\beta}$ involves expressing the complete log-likelihood as follows:

$$\mathcal{L}(\mathcal{A},\mathbf{Y};\boldsymbol{\beta},\boldsymbol{\Delta})=\sum_{l\in L^{k}}\ln\beta^{k}+C(\beta^{k})\quad s.t.\;\sum_{k=1}^{K}\beta^{k}=1\tag{11}$$

where $L^{k} = \{l \in [1, L] \;s.t.\; y_{lk} = 1\}$ is the set of layers of group $k$, and $C(\beta^{k})$ is a constant with respect to $\beta^{k}$.
By employing the Lagrange multiplier approach, the solution satisfying the Karush-Kuhn-Tucker (KKT) conditions can be expressed as follows:

$$\beta^{k}=\frac{N^{k}}{L}\tag{12}$$

where $N^{k} = |L^{k}|$ is the number of layers in group $k$ and $L$ is the total number of layers.
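Under the hard assignments of equation 10, the maximization step of equation 12 reduces to counting layers per group; a small sketch with a hypothetical label vector:

```python
import numpy as np

def update_beta(y, K):
    """beta^k = N^k / L (equation 12): the fraction of the L layers
    currently assigned to each of the K groups."""
    counts = np.bincount(y, minlength=K)   # N^k for each group k
    return counts / len(y)                 # divide by the total number of layers L

y = np.array([0, 0, 1, 0, 1])              # hypothetical layer-to-group labels
beta = update_beta(y, K=2)
```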
## 4.2.2 Estimation Of Vertex-To-Block Variables And Maximization Of Parameters ∆k

Our model assumes that the layers within the same group are generated independently from the same DCSBM distribution specific to that group. Let $\mathcal{A}^{k}$ be the multiplex graph that contains the layers of group $k$. Based on equation 6, the log-likelihood of group $k$ is expressed as follows:

$$\mathcal{L}(\mathcal{A}^{k},\mathbf{Z}^{k};\boldsymbol{\Delta}^{k})=\sum_{l\in L^{k}}\sum_{i,j,i\neq j}A^{l}_{ij}\ln(\pi^{k}_{Z_{i}Z_{j}})+(1-A^{l}_{ij})\ln(1-\pi^{k}_{Z_{i}Z_{j}})+\sum_{l\in L^{k}}\sum_{i,j,i\neq j}\ln(\theta^{k}_{i}\theta^{k}_{j})+\sum_{i=1}^{N}\ln(\alpha^{k}_{Z_{i}})$$
$$s.t.\;\sum_{q=1}^{Q^{k}}\alpha^{k}_{q}=1,\;\sum_{i: Z^{k}_{i}=q}\theta^{k}_{i}=1\tag{13}$$
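The Bernoulli part of equation 13 sums over every layer of the group and every dyad $i \neq j$. A vectorized sketch under hard block labels (toy adjacency matrices; $\pi$ is clipped away from 0 and 1 for numerical safety):

```python
import numpy as np

def group_bernoulli_loglik(A_layers, Z, pi, eps=1e-12):
    """Sum over l in L^k and ordered dyads i != j of
    A^l_ij ln(pi_{Z_i Z_j}) + (1 - A^l_ij) ln(1 - pi_{Z_i Z_j})."""
    pi = np.clip(pi, eps, 1 - eps)
    P = pi[np.ix_(Z, Z)]                     # N x N matrix of pi_{Z_i Z_j}
    total = 0.0
    for A in A_layers:
        term = A * np.log(P) + (1 - A) * np.log(1 - P)
        np.fill_diagonal(term, 0.0)          # exclude self-dyads i == j
        total += term.sum()
    return total

Z = np.array([0, 0, 1, 1])                   # hard vertex-to-block labels
pi = np.array([[0.9, 0.1], [0.1, 0.8]])      # block affinity matrix
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]])
ll = group_bernoulli_loglik([A, A], Z, pi)   # two identical layers in the group
```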
In order to optimize the parameters that maximize the previous equation, it is essential to first estimate the latent assignment variables. This task is addressed using the Expectation-Maximization (EM) algorithm, which requires computing the posterior probability of the latent variable $\mathbf{Z}^{k}$ given the observed layers, denoted $P(\mathbf{Z}^{k}|\mathcal{A}^{k})$. However, for single-layer graphs, it has been demonstrated that calculating this conditional probability is computationally intractable Celisse et al. (2012). Various approaches have been proposed in the literature to tackle this challenge Li et al. (2015); Lee & Wilkinson (2019), but they tend to suffer from the curse of dimensionality, particularly when dealing with large-scale datasets.

The Variational EM technique has been adopted as an alternative for handling the DCSBM estimation challenge. Previous studies have established the VEM technique's convergence for single-layer SBM and DCSBM models and for multiplex SBM graphs Celisse et al. (2012); Barbillon et al. (2017).
The VEM approach involves approximating the posterior distribution $P(\mathbf{Z}^{k}|\mathcal{A}^{k})$ by another distribution $R_{\mathcal{A}^{k}}$ over $\mathbf{Z}^{k}$. Leveraging this approximation, the marginal log-likelihood over $\mathbf{Z}^{k}$ can be expressed as follows:

$$\mathcal{L}(\mathcal{A}^{k};\boldsymbol{\Delta}^{k})=\sum_{\mathbf{Z}^{k}}R_{\mathcal{A}^{k}}(\mathbf{Z}^{k})\mathcal{L}(\mathcal{A}^{k},\mathbf{Z}^{k};\boldsymbol{\Delta}^{k})-\sum_{\mathbf{Z}^{k}}R_{\mathcal{A}^{k}}(\mathbf{Z}^{k})\ln\Big(R_{\mathcal{A}^{k}}(\mathbf{Z}^{k})\Big)+\mathbf{KL}\big[R_{\mathcal{A}^{k}}(\mathbf{Z}^{k}),P(\mathbf{Z}^{k}|\mathcal{A}^{k};\boldsymbol{\Delta}^{k})\big]\tag{14}$$
where $\mathbf{KL}$ is the Kullback-Leibler divergence. Therefore, instead of maximizing $\mathcal{L}(\mathcal{A}^{k};\boldsymbol{\Delta}^{k})$ for the observed data, the VEM technique optimizes a lower bound of $\mathcal{L}(\mathcal{A}^{k};\boldsymbol{\Delta}^{k})$, denoted $\mathcal{I}_{\theta}(R_{\mathcal{A}^{k}})$. This lower bound is known as the evidence lower bound, and it can be defined as follows:

$$\mathcal{I}_{\theta}(R_{\mathcal{A}^{k}})=\mathcal{L}(\mathcal{A}^{k};\boldsymbol{\Delta}^{k})-\mathbf{KL}\big[R_{\mathcal{A}^{k}}(\mathbf{Z}^{k}),P(\mathbf{Z}^{k}|\mathcal{A}^{k};\boldsymbol{\Delta}^{k})\big]=\sum_{\mathbf{Z}^{k}}R_{\mathcal{A}^{k}}(\mathbf{Z}^{k})\mathcal{L}(\mathcal{A}^{k},\mathbf{Z}^{k};\boldsymbol{\Delta}^{k})-\sum_{\mathbf{Z}^{k}}R_{\mathcal{A}^{k}}(\mathbf{Z}^{k})\ln R_{\mathcal{A}^{k}}(\mathbf{Z}^{k})\leq\mathcal{L}(\mathcal{A}^{k};\boldsymbol{\Delta}^{k})\tag{15}$$

The equality between the evidence lower bound and the log-likelihood holds when $R_{\mathcal{A}^{k}}(\mathbf{Z}^{k})$ equals the true posterior distribution $P(\mathbf{Z}^{k}|\mathcal{A}^{k};\boldsymbol{\Delta}^{k})$. Maximizing the lower bound $\mathcal{I}_{\theta}(R_{\mathcal{A}^{k}})$ is therefore equivalent to minimizing the Kullback-Leibler divergence $\mathbf{KL}\big[R_{\mathcal{A}^{k}}(\mathbf{Z}^{k}),P(\mathbf{Z}^{k}|\mathcal{A}^{k};\boldsymbol{\Delta}^{k})\big]$. Given the discrete nature of the vertex-to-block variables, we approximate the posterior distribution by selecting $R_{\mathcal{A}^{k}}(\mathbf{Z}^{k})$ as follows:
226 |
+
$$R_{\mathcal{A}^{k}}(\mathbf{Z}^{k})=\prod_{i=1}^{N}h(\mathbf{Z}_{i}^{k};\tau_{i}^{k})\tag{1}$$
where h(·; τ_i^k) is a multinomial distribution with parameters τ^k = {τ_1^k, ..., τ_{Q_k}^k}. The entry τ_{iq}^k approximates the probability that vertex i belongs to community q in group k. I_θ(R_{A^k}) can then be written as follows:

$$\begin{split}\mathcal{I}_{\theta}(R_{\mathcal{A}^{k}})&=\sum_{l\in L^{k}}\sum_{i\neq j}\sum_{q,w}\tau_{iq}^{k}\tau_{jw}^{k}\Big[A_{ij}^{l}\ln(\pi_{qw}^{k})+(1-A_{ij}^{l})\ln(1-\pi_{qw}^{k})\Big]+\sum_{i}\sum_{q}\tau_{iq}^{k}\ln(\alpha_{q}^{k})\\&\quad+\sum_{i}\sum_{q}\tau_{iq}^{k}\ln(\theta_{i}^{k})-\sum_{i}\sum_{q}\tau_{iq}^{k}\ln(\tau_{iq}^{k})\end{split}\tag{17}$$
**Algorithm 1** Inference of the likelihood of MDCSBM

Input: G, K, Q = [Q_1, ..., Q_K]
Output: Y, Z, Π, Θ, β, α
Initialize Y, Z with Algorithm 2
while Iteration < Iteration_max ∧ not converged do
  Estimate y_lk with Eq. (8)
  Compute y_lk with Eq. (10)
  Compute α_q^k with Eq. (19)
  Compute π_qw^k with Eq. (20)
  Compute θ_i^k with Eq. (18)
  Compute τ_iq^k with Eq. (21)
end while

The parameters that maximize I_θ(R_{A^k}) are derived directly from the formula above. To ensure that the vector α^k and the matrix Π^k satisfy the constraints Σ_q α_q^k = 1 and 0 ≤ π_qw^k ≤ 1 for all q, w ∈ {1, ..., Q_k}, Lagrange multipliers are employed, yielding the following updates:

$$\hat{\theta}_{i}^{k}=\frac{\sum_{l\in L^{k}}\sum_{j}A_{ij}^{l}}{\sum_{l\in L^{k}}\sum_{i,j\in q}A_{ij}^{l}}\tag{18}$$

$$\hat{\alpha}_{q}^{k}=\frac{1}{N}\sum_{i}\tau_{iq}^{k}\tag{19}$$

$$\hat{\pi}_{qw}^{k}=\frac{\sum_{l\in L^{k}}\sum_{i\neq j}\tau_{iq}^{k}\tau_{jw}^{k}A_{ij}^{l}}{\sum_{l\in L^{k}}\sum_{i\neq j}\tau_{iq}^{k}\tau_{jw}^{k}}\tag{20}$$

$$\hat{\tau}_{iq}^{k}\propto\hat{\alpha}_{q}^{k}\prod_{l\in L^{k}}\prod_{j\neq i}\prod_{w}\Big[(\hat{\pi}_{qw}^{k})^{A_{ij}^{l}}(1-\hat{\pi}_{qw}^{k})^{(1-A_{ij}^{l})}\Big]^{\hat{\tau}_{jw}^{k}}\tag{21}$$

where α̂^k, Θ̂^k, Π̂^k, and τ̂^k are the best current parameters for group k. Due to the interdependence between Π̂^k and τ̂^k, an effective way to determine the best estimate is to alternate between updating Π̂^k and τ̂^k iteratively until convergence. The optimized parameters define the distribution of the DCSBM and the vertex-to-block assignments for group k. The same computation is executed for each group independently. The overall method is summarized in Algorithm 1.
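To make the alternation concrete, the updates of Eqs. (19)-(21) for a single group can be sketched in code. This is a hedged illustration, not the paper's implementation: it omits the degree-correction term θ of Eq. (18) and the layer-to-group step, and the function name and toy data are ours.

```python
import numpy as np

def vem_group(A_list, Q, n_iter=50, seed=0):
    """Alternating variational updates (cf. Eqs. 19-21) for one group.

    A_list: binary (N, N) adjacency matrices of the group's layers.
    Q: number of blocks. Degree correction (Eq. 18) is omitted here.
    """
    rng = np.random.default_rng(seed)
    N = A_list[0].shape[0]
    A = np.stack(A_list)                      # (L, N, N)
    off = 1.0 - np.eye(N)                     # mask out i == j pairs
    tau = rng.dirichlet(np.ones(Q), size=N)   # soft assignments (N, Q)

    for _ in range(n_iter):
        # M-step: mixing proportions (Eq. 19) and block probabilities (Eq. 20)
        alpha = tau.mean(axis=0)
        num = np.einsum('lij,iq,jw->qw', A * off, tau, tau)
        col = tau.sum(axis=0)
        den = len(A_list) * (np.outer(col, col) - tau.T @ tau)
        pi = np.clip(num / np.maximum(den, 1e-12), 1e-6, 1.0 - 1e-6)

        # E-step: tau (Eq. 21), accumulated in log space for stability
        log_tau = np.tile(np.log(alpha), (N, 1))
        for Al in A_list:
            log_tau += ((Al * off) @ tau) @ np.log(pi).T
            log_tau += (((1.0 - Al) * off) @ tau) @ np.log(1.0 - pi).T
        log_tau -= log_tau.max(axis=1, keepdims=True)
        tau = np.clip(np.exp(log_tau), 1e-10, None)
        tau /= tau.sum(axis=1, keepdims=True)
    return tau, pi, alpha

# Toy usage: three identical layers containing two planted cliques.
A = np.zeros((6, 6))
A[:3, :3] = 1.0
A[3:, 3:] = 1.0
np.fill_diagonal(A, 0.0)
tau, pi, alpha = vem_group([A, A, A], Q=2)
print(tau.shape)
```

Alternating the two steps until the lower bound stops improving mirrors the Π̂^k / τ̂^k alternation described above.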
## 4.3 Initialization Model
The initialization process of MDCSBM involves assigning values to the layer-to-group variables Y and the vertex-to-block variables Z. Effective initialization of these assignment variables contributes to faster convergence and a higher chance of recovering accurate ground-truth values. In the context of mixture models, the K-means algorithm is commonly employed for initializing assignment variables due to its simplicity and speed.

In this paper, we introduce a novel spectral technique that computes layer-to-group and vertex-to-block variables, such that the clustering results are used as an initialization for inferring the MDCSBM model.
Consider U = {U^1, U^2, ..., U^K} a set of centroid graphs, with each graph U^k being the centroid that represents group k. We aim to find the layer-to-group variables by optimizing centroids that best represent each group. Then, each centroid U^k is clustered into Q_k communities to initialize the vertex-to-block assignment variable.

One way to find the communities of centroid k is to ensure that it is composed of Q_k disconnected components, where each component corresponds to a community in the graph. In network theory, a graph with Q_k components exhibits a multiplicity of Q_k null eigenvalues in its corresponding Laplacian matrix, and these null eigenvalues are the smallest eigenvalues of the Laplacian. Thus, minimizing the Q_k smallest eigenvalues of the Laplacian matrix associated with U^k facilitates the formation of Q_k disconnected components within the centroid. Therefore, the model aiming to optimize these representations can be formulated as follows:
$$\begin{split}&\min_{\mathcal{U},\mathbf{F},\mathbf{Y}}\sum_{l=1}^{L}\sum_{k=1}^{K}y_{lk}||\mathcal{U}^{k}-\mathcal{A}^{l}||_{F}^{2}+2\lambda\sum_{k=1}^{K}Tr(\mathbf{F}^{k^{T}}\mathbf{L}_{\mathcal{U}^{k}}\mathbf{F}^{k})\\&s.t.\ \ \forall i,\ u_{ij}^{k}\geq0,\ \mathbf{1}^{T}\mathbf{u}_{i}^{k}=1,\ \forall k,\\&(\mathbf{F}^{k})^{\mathsf{T}}\mathbf{F}^{k}=\mathbf{I},\ y_{lk}\in\{0,1\},\ \sum_{k=1}^{K}y_{lk}=1\end{split}\tag{22}$$
where ||·||_F^2 denotes the squared Frobenius norm, and u_ij^k represents an element of the centroid U^k, for all i, j ∈ V. The Laplacian of centroid U^k is denoted by L_{U^k}, and F^k represents an embedding matrix. The Laplacian matrix is computed in its unnormalized version as follows:

$${\mathcal{L}}_{{\mathcal{U}}^{k}}={\mathcal{D}}_{{\mathcal{U}}^{k}}-{\mathcal{U}}^{k}\tag{23}$$
where D_{U^k} denotes the degree matrix, a diagonal matrix. F^k ∈ R^{N×Q_k} helps to recover the number of connected components in the graph. The embedding matrix was included in the cost function to relax the non-linear constraint of constructing a centroid with Q_k disconnected components that effectively represent the communities, leveraging the fact that the number of null eigenvalues corresponds to the number of components in the graph. The optimization process for this model is described in the supplementary materials. Experimentally, this initialization helps the MDCSBM converge faster than random initialization, up to more than 30 times faster, depending on the data structure.
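The link between null Laplacian eigenvalues and connected components can be checked numerically; a small sketch (the names are illustrative, assuming NumPy as the working environment):

```python
import numpy as np

def unnormalized_laplacian(U):
    """L = D - U (Eq. 23), with D the diagonal degree matrix of U."""
    return np.diag(U.sum(axis=1)) - U

# A graph with two disconnected edges {0,1} and {2,3}: its Laplacian
# has exactly two (near-)zero eigenvalues, one per connected component.
U = np.zeros((4, 4))
U[0, 1] = U[1, 0] = 1.0
U[2, 3] = U[3, 2] = 1.0
eigvals = np.linalg.eigvalsh(unnormalized_laplacian(U))
n_components = int(np.sum(np.abs(eigvals) < 1e-9))
print(n_components)  # 2
```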
## 4.4 Optimization Of Initialization Model
The initialization model described in Section 4.3, Equation (22), involves multiple variables, making it challenging to optimize them simultaneously. Therefore, we adopt an iterative technique in which each variable is optimized while the others are held fixed.
## 4.4.1 Optimizing Y, While U And F Are Fixed
The model can be represented as follows:

$$\min_{\mathbf{Y}}\sum_{l=1}^{L}\sum_{k=1}^{K}y_{lk}||\mathcal{U}^{k}-\mathcal{A}^{l}||_{F}^{2}\quad s.t.\ \ y_{lk}\in\{0,1\},\ \sum_{k=1}^{K}y_{lk}=1\tag{24}$$
By relaxing the constraint y_lk ∈ {0, 1} to y_lk ∈ [0, 1], the model becomes linear, facilitating the application of analytical solutions that satisfy the Karush-Kuhn-Tucker (KKT) conditions using the Lagrange technique. The analytical solution is expressed as follows:

$$y_{lk}=\frac{||\mathcal{U}^{k}-\mathcal{A}^{l}||_{F}^{2}}{\sum_{k'=1}^{K}||\mathcal{U}^{k'}-\mathcal{A}^{l}||_{F}^{2}}\tag{25}$$

The group to which layer l is assigned is then determined as follows:

$$y_{lk}=\operatorname*{argmax}_{k}\,y_{lk}\tag{26}$$
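As an illustration of the hard assignment step, a common convention is to give each layer to the centroid at minimal squared Frobenius distance; this sketch assumes that nearest-centroid reading of Eqs. (24)-(26), with names of our own choosing:

```python
import numpy as np

def assign_layers(A_list, U_list):
    """Hard layer-to-group assignment: each layer joins the group whose
    centroid U^k is closest in squared Frobenius norm (cf. Eqs. 24-26).

    A_list and U_list are lists of (N, N) adjacency / centroid matrices.
    """
    A = np.stack(A_list)                      # (L, N, N)
    U = np.stack(U_list)                      # (K, N, N)
    # d2[l, k] = ||U^k - A^l||_F^2
    d2 = ((A[:, None] - U[None]) ** 2).sum(axis=(2, 3))
    return d2.argmin(axis=1)                  # group index per layer

# Two centroids: the empty graph and the complete graph on 3 vertices.
empty = np.zeros((3, 3))
full = np.ones((3, 3)) - np.eye(3)
print(assign_layers([empty, full, full], [empty, full]))  # [0 1 1]
```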
## 4.4.2 Optimizing U, While Y **And F Are Fixed**
Firstly, the optimization of each centroid U^k is performed independently. According to Wang et al. (2020), the objective function Tr(F^{k^T} L_{U^k} F^k) can be expressed as ½ Σ_{i,j} ||f_i − f_j||_2^2 u_{ij}. Therefore, the optimization for each centroid can be formulated as follows:
$$\min_{\mathcal{U}^{k}}\sum_{l\in L^{k}}\sum_{i,j}(u_{ij}-A_{ij}^{l})^{2}+\lambda\sum_{i,j}||\mathbf{f}_{i}-\mathbf{f}_{j}||_{2}^{2}\,u_{ij}\quad s.t.\ \ u_{ij}\geq0,\ \mathbf{1}^{T}\mathbf{u}_{i}=1\tag{27}$$
L^k represents the set of layers for which y_lk equals one, and a_i^l denotes the i-th row of A^l. We denote ||f_i − f_j||_2^2 by d_ij. Since the optimization of each vertex vector u_i is independent of the others, the optimization of U^k can be formulated, up to constants, as follows:

$$\min_{\mathbf{u}_{i}^{k}}\sum_{l\in L^{k}}\Big|\Big|\mathbf{u}_{i}-\mathbf{a}_{i}^{l}+\frac{\lambda}{2|L^{k}|}\mathbf{d}_{i}\Big|\Big|_{2}^{2}\quad s.t.\ \ \forall j,\ u_{ij}\geq0,\ \mathbf{1}^{T}\mathbf{u}_{i}=1\tag{28}$$
The model above is quadratic with linear constraints, and is therefore convex. It can be solved using augmented Lagrange multipliers; alternatively, any quadratic programming solver can handle it efficiently.
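Concretely, since each row u_i in Eq. (28) is constrained to the probability simplex, its minimizer is the Euclidean projection of the unconstrained target onto the simplex. A standard sort-based sketch (an illustrative helper, not necessarily the solver used in the paper):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1}.

    Solves min_x ||x - v||_2^2 under the constraints of Eq. (28),
    using the classic O(N log N) sort-based algorithm.
    """
    u = np.sort(v)[::-1]                      # sort descending
    css = np.cumsum(u)
    # largest index rho with u_rho * (rho+1) > css_rho - 1 (0-based)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

x = project_simplex(np.array([0.5, 0.2, -0.1]))
print(x.round(3))  # [0.633 0.333 0.033]
```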
## 4.4.3 Optimizing F^k While U And Y Are Fixed
The optimization of each F^k is performed independently of the remaining groups. For a given group, the model can be formulated as follows:
$$\min_{\mathbf{F}^{k}}Tr(\mathbf{F}^{k^{T}}\mathbf{L}_{\mathcal{U}^{k}}\mathbf{F}^{k})\quad s.t.\ \ \mathbf{F}^{k^{T}}\mathbf{F}^{k}=\mathbf{I}\tag{29}$$
The optimal F^k can be obtained by extracting the Q_k eigenvectors of the Laplacian matrix L_{U^k} associated with its Q_k smallest eigenvalues. Recall that Q_k denotes the number of communities within group k.
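By the Rayleigh-Ritz theorem, Eq. (29) is solved by stacking exactly those eigenvectors; a dense-matrix sketch with illustrative names:

```python
import numpy as np

def spectral_embedding(L, Q):
    """F^k for Eq. (29): eigenvectors of the Laplacian associated
    with the Q smallest eigenvalues."""
    eigvals, eigvecs = np.linalg.eigh(L)   # eigh returns ascending order
    return eigvecs[:, :Q]

# Laplacian of two disconnected edges {0,1} and {2,3}: the two smallest
# eigenvalues are 0, so Tr(F^T L F) vanishes at the optimum.
U = np.zeros((4, 4))
U[0, 1] = U[1, 0] = 1.0
U[2, 3] = U[3, 2] = 1.0
L = np.diag(U.sum(axis=1)) - U
F = spectral_embedding(L, 2)
print(np.trace(F.T @ L @ F))  # ~0 for a 2-component graph
```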
**Algorithm 2** Multi-centroid initialization

Input: G, K, Q = [Q_1, ..., Q_K]
Output: U, Y
Initialize U = {U^1, U^2, ..., U^K}
while Iteration < Iteration_max ∧ not converged do
  optimize y_lk with Eq. (26)
  optimize u_i with Eq. (28)
  compute F^k from the eigenvectors associated with the Q_k smallest eigenvalues of L_{U^k}
end while
## 5 Experiments
To assess the properties of the Mixture Degree-Corrected Stochastic Block Model (MDCSBM), we evaluated its performance across various datasets. This analysis includes real-world data on brain connectivity from cerebral imaging and reality-mining data on proximity interactions among students at a university.

Additionally, we utilized multiple synthetic data partitions to explore the limits of MDCSBM and assess its scalability in handling large datasets.

We conducted comparative analyses of MDCSBM against several established algorithms, including the multiplex DCSBM, Generalized Louvain Mucha et al. (2010), and Graph Fusion Spectral Clustering (GFSC) Kang et al. (2019). These algorithms are designed to optimize partitions within multilayer graphs. To quantitatively evaluate the performance of each algorithm, we employed metrics such as average Normalized Mutual Information (NMI) and average Adjusted Mutual Information (AMI).
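For reference, NMI can be computed directly from two label vectors; a minimal sketch using arithmetic-mean normalization (one of several common conventions; in practice library implementations, e.g. scikit-learn's, also provide AMI, which corrects MI for chance):

```python
from math import log

def mutual_info(a, b):
    """I(a; b) in nats between two label lists of equal length."""
    n = len(a)
    mi = 0.0
    for x in set(a):
        for y in set(b):
            nxy = sum(1 for i in range(n) if a[i] == x and b[i] == y)
            if nxy:
                nx = sum(1 for v in a if v == x)
                ny = sum(1 for v in b if v == y)
                mi += (nxy / n) * log(n * nxy / (nx * ny))
    return mi

def entropy(a):
    n = len(a)
    return -sum((c / n) * log(c / n) for c in (a.count(x) for x in set(a)))

def nmi(a, b):
    """NMI with arithmetic-mean normalization; invariant to relabeling."""
    h = (entropy(a) + entropy(b)) / 2
    return mutual_info(a, b) / h if h else 1.0

print(nmi([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0: identical up to relabeling
```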
## 5.1 Real Data Set

## 5.1.1 Brain Connectivity From Multiplex Graph Representation
To assess the relevance of the MDCSBM in real applications, we conducted a study focusing on brain connectivity using diffusion Magnetic Resonance Imaging (dMRI)1. We utilized the HNU1 dataset, which includes 300 undirected graphs derived from 10 brain-scanning sessions across 30 individuals Zuo et al. (2015).
Each graph contains 200 vertices representing different regions of the brain, with edges indicating the observed neural connections between these regions. The dataset treats each graph as an independent observation, aligning with methodologies from prior studies Mantziou et al. (2023); Arroyo et al. (2020).
Our primary goal was to detect groups of subjects exhibiting similar brain connectivity patterns and to identify communities within these regions. Considering the brain's division into two hemispheres, each graph inherently comprises two blocks. However, inter-subject scan variations reflect the unique neural states of the individuals.
The MDCSBM was tasked with recognizing subjects with closely related brain scans, accounting for each hemisphere. The model was initialized to distinguish 30 groups, corresponding to the number of subjects, with two blocks per group reflecting the hemispheres. Using the centroid method for initialization, the model consistently identified two blocks for each snapshot across 100 runs, which supports neuroscientific evidence of significant hemispherical independence. However, it also tended to consolidate subjects into 27 distinct groups from the 300 graphs.
Remarkably, the model effectively grouped each subject's graphs within the identified clusters. Notably, it coalesced subjects 8 and 23, 11 and 14, and 10 and 28 into the same groups, as detailed in Table 1. The grouping of subjects 11 and 14 is consistent with findings from a semi-supervised study Arroyo et al. (2020). Our unsupervised application of the MDCSBM algorithm produced satisfactory outcomes, demonstrating the model's efficacy in navigating complex and unsupervised domains.
This implementation underscores the MDCSBM's potential for facilitating unsupervised exploration within sophisticated fields, offering robust insights into the underlying patterns of brain connectivity.
![10_image_0.png](10_image_0.png)
Table 1: The clusters obtained by the MDCSBM algorithm for each image of each subject. Each of the 30 subjects has ten images, presented as 10 layers.
![11_image_0.png](11_image_0.png)
Figure 2: The figure illustrates the outcomes of the multi-group clustering applied to the reality-mining proximity dataset. Panel (a) depicts the total number of days represented in each group. Panel (b) details the monthly distribution of days for each group. Panel (c) presents the community structures within a layer from group 0. Finally, panel (d) displays the BIC values relative to the number of groups.
## 5.1.2 Reality Mining Study
In this second study, we analyze a physical proximity graph derived from the reality mining study conducted by Eagle and Pentland, featuring a group of college students and faculty. This dataset comprises undirected graphs representing interactions among 96 students from the Massachusetts Institute of Technology, collected over a nine-month period. Participants were equipped with mobile phones installed with special software that recorded close-proximity encounters using Bluetooth technology. Each layer of this multiplex graph therefore represents a day, with edges indicating at least one observed proximity encounter between individuals on that day. The dataset thus includes L = 234 layers, each with |V| = 96 vertices.
Figure 2 illustrates the results from the MDCSBM model applied to this dataset. Sub-figure (d) displays the BIC values relative to the number of groups. We identified five distinct groups, with two communities in groups 0 and 2, and one community in groups 1, 3, and 4. The single community structure in groups 1, 3, and 4 is attributable to the high sparsity of layers within these groups. In contrast, groups like 0 exhibit dual communities as depicted in sub-figure (c), which shows a layer from group 0.
Sub-figures (a) and (b) reveal discernible patterns within these groups, particularly for groups 0 and 2, which each contain two communities. From both sub-figures, it is evident that group 0, represented in blue, consists of days of the week when courses are likely to occur in the first part of the year. This group is characterized by denser layers, indicating that students are taking two different courses. Conversely, group 2, represented in green, functions similarly to group 0 but pertains to the second part of the year, again involving students taking two different courses. This explains the presence of two communities within these groups. Group 3, colored red, encompasses layers from weekends and holidays, with a notable focus on the Christmas period in December. Other groups capture monthly variations; for instance, group 1, colored orange, represents the mid-year period, which aligns with academic examinations and displays sparser community interactions.
Group 4, colored purple, reflects end-of-year activities, showing varied layer densities. These interpretations align with typical university schedules, where students primarily interact with classmates during academic sessions but engage with a broader range of acquaintances outside of class times.
This experiment demonstrates the effectiveness of the MDCSBM in real-world applications, adeptly handling complex data structures and offering insightful interpretations of dynamic social interactions.
## 5.2 Synthetic Data

## 5.2.1 Variability In Block Size
In this experiment, we aim to assess sensitivity to block size across groups. The dataset is composed of 3 groups. Each group of layers consists of 10 graphs, each with 100 vertices organized into four blocks.

![12_image_0.png](12_image_0.png)
Vertices inside a block are randomly linked to each other with probability π_intra = 0.5, and vertices from different blocks are randomly connected with probability π_inter = 0.3. What distinguishes the groups of layers is the number of vertices in each block: G1 = {25, 25, 25, 25}, G2 = {20, 25, 25, 30}, and G3 = {30, 30, 20, 20}, where G_i is the i-th group; for each group, the first number indicates the number of vertices in the first block, the second number that of the second block, and so forth, as shown in Figure 3.
Figure 3: The adjacency matrices correspond to a single layer of each group: G1 on the right, G2 in the middle, and G3 on the left. The intra-block density is set at π_intra = 0.5. For better visualization, the inter-block connectivity probability is shown at π_inter = 0.1 instead of the π_inter = 0.3 used in the experiment.
The actor-based clustering algorithms on multiplex graphs yield an average clustering of vertices: in contrast to MDCSBM, they do not differentiate between the blocks of vertices of each group of layers, which contributes to the superior NMI/AMI results of MDCSBM over the other multiplex algorithms, as shown in Table 2. The mean error between the predicted and generated parameters of MDCSBM is meanErrorParam_MDCSBM = 0.04, affirming the improved parameter recovery of MDCSBM compared to the multiplex DCSBM, whose error is meanErrorParam_MultiDCSBM = 0.67.
|     | MDCSBM | DCSBM | GLouvain | GFSC  |
|-----|--------|-------|----------|-------|
| NMI | 100    | 61.40 | 77.44    | 73.80 |
| AMI | 100    | 58.74 | 75.74    | 72.65 |
Table 2: NMI and AMI performance of MDCSBM, multiplex DCSBM, GLouvain and GFSC on the variability-in-block-size synthetic dataset.
![13_image_0.png](13_image_0.png)
Figure 4: Performance of MDCSBM, DCSBM, GLouvain, and GFSC in finding the clusters of vertices as the number of layers per group varies. NMI and AMI were used as performance metrics.
## 5.2.2 Variability In Block Distribution
In this experiment, we aim to assess the model's performance under variation in block distribution. The dataset comprises three groups, each containing ten layers. Each graph consists of 100 vertices distributed across four equally sized blocks ({25, 25, 25, 25}) with the same intra-community probability (π_intra = 0.5). The distinction between layer groups lies in the probability of an edge between blocks: π_inter^{G1} = 0.1, π_inter^{G2} = 0.3, and π_inter^{G3} = 0.5. This variability tests the algorithm's capability to identify groups with different Π distributions. Notably, the third group has π_inter = π_intra, resembling a random graph without communities.
MDCSBM accurately identifies the vertex blocks for each group, with an estimated parameter error of meanErrorParam_MDCSBM = 0.04. Additionally, the algorithm recognizes a single community for the layers without communities, consistent with a random graph; this demonstrates MDCSBM's ability to identify noisy layers. In contrast, the multiplex algorithms exhibit an averaging effect in the vertex-to-block assignment; in particular, the multiplex DCSBM estimates the vertex-to-block assignment with a higher error (meanErrorParam_MultiDCSBM = 0.71), as observed previously.
|     | MDCSBM | DCSBM | GLouvain | GFSC  |
|-----|--------|-------|----------|-------|
| NMI | 100    | 55.54 | 66.66    | 53.55 |
| AMI | 100    | 55.03 | 66.66    | 53.02 |
Table 3: NMI and AMI performance of MDCSBM, multiplex DCSBM, GLouvain and GFSC on the variability-in-block-distribution synthetic dataset.
## 5.2.3 Variability In Number Of Layers
In this experiment, we test the scalability of the method with respect to the number of layers. We keep the number of vertices, the number of groups, and the block distribution constant while varying the number of layers in each group. Similar to Experiment 5.2.1, we define three groups with an equal number of layers but distinct block divisions for each group. Specifically, we set π_intra = 0.5 and π_inter = 0.3. The block distribution within each group consists of four blocks, as follows: G1 = {25, 25, 25, 25}, G2 = {20, 25, 25, 30}, and G3 = {30, 30, 20, 20}, where G_i represents the i-th group.
The results of this experiment are shown in Figure 4. The performance of MDCSBM in retrieving the optimal blocks for each layer improves as the number of layers increases, compared to the other methods. This can be explained by the law of large numbers, which delineates the convergence in probability to the expected value as the number of samples, the layers in our case, increases.
As the number of layers in the multiplex graph grows, the computation time increases linearly, because the number of parameters scales linearly with the number of layers for a fixed number of groups and blocks. This contrasts with the other tested algorithms, which do not scale well to large datasets.
## 5.2.4 Variability In Number Of Vertices
In this experiment, we assess the scalability of the method with respect to the size of each graph. We fix the number of layers, the number of groups, and the block distribution of each group, and vary the number of vertices of the multiplex graph. We set three groups with 10 layers each and different block divisions such that G1 = {25%N, 25%N, 25%N, 25%N}, G2 = {20%N, 25%N, 25%N, 30%N}, and G3 = {30%N, 30%N, 20%N, 20%N}, where x%N means x percent of the number of vertices of the layer. The intra-block probability is π_intra = 0.5 and the inter-block probability is π_inter = 0.3.

![14_image_0.png](14_image_0.png)
Figure 5: The performance of MDCSBM, DCSBM, Glouvain, and GFSC to find the clusters of vertices regarding the number of vertices in a multiplex graph. The NMI and AMI were used as metrics of performance.
The obtained results are shown in Figure 5. MDCSBM scales to large datasets with thousands of nodes. The time complexity varies with the graph's size and block structure; we study the time complexity of MDCSBM in the supplementary materials.
## 5.2.5 Variability In The Number Of Groups
The objective of this experiment is to assess the method's sensitivity with respect to the number of groups within the multiplex graph. We keep the number of layers in the multiplex graph fixed at 20 and vary the number of groups from 2 to 8, ensuring that each group has a distribution distinct from the others.

![14_image_1.png](14_image_1.png)
Figure 6: Performance of MDCSBM, DCSBM, GLouvain and GFSC when the number of groups varies with a fixed number of layers.

Figure 6 shows the performance of the algorithms in this experiment. MDCSBM performs better than the others at multi-group community detection. However, for a high number of groups, the performance of MDCSBM decreases; this may be explained by the reduced number of layers per group when the number of groups is high. This result improves when the number of layers in each group increases.
## 5.2.6 Sensitivity To Block Size
In this experiment, we aim to assess the method's ability to handle unbalanced block sizes. We construct a multiplex graph with 20 layers and 2 groups, such that each group has 10 layers of 100 vertices each, and each group has two blocks with the same distribution. We fix the block size of one group at 50 vertices per block, while for the other group we vary the block size from 10 to 50 vertices.

![15_image_0.png](15_image_0.png)
Figure 7: Performance of MDCSBM, DCSBM, GLouvain and GFSC when the block size varies. The varied group has two blocks; the sensitivity is computed as the ratio between the sizes of the two blocks.
The performance of the algorithms in this experiment is presented in Figure 7. It is observed that MDCSBM outperforms the others in multi-group community detection. However, with a high number of groups, the performance of MDCSBM decreases, potentially due to the reduced number of layers for the high number of groups. This result is expected to improve when the number of layers in each group increases.
## 5.2.7 Sensitivity To Group Size
This experiment aims to assess the model's performance when the groups have an unbalanced distribution of layers. To achieve this, we consider a multiplex graph with two groups featuring distinct block distributions. The number of layers in the first group is fixed at 10, while the number of layers in the second group varies from 1 to 100. Each layer consists of 100 vertices.

![15_image_1.png](15_image_1.png)
Figure 8: Performance of MDCSBM, DCSBM, GLouvain, and GFSC for different numbers of layers in the group. It varies from 1 to 40 layers.
Figure 8 illustrates the algorithms' performance in this setting. Compared to the others, the MDCSBM algorithm demonstrates notably stable performance, even in scenarios with a high imbalance in the number of layers. We attribute this result to the effectiveness of the joint clustering approach.
## 5.2.8 Time Complexity
To illustrate the model's time complexity and its scalability to large datasets, we conduct two experiments in which the running time is evaluated for varying numbers of layers and vertices. Figure 9 shows the time complexity in each case. The time complexity scales linearly with the number of layers; this behavior is attributed to the number of parameters, which grows linearly with the number of layers.

![16_image_0.png](16_image_0.png)

((a)) ((b))

![16_image_1.png](16_image_1.png)

Figure 9: Time complexity with respect to the number of layers and vertices. On the right, the time is measured as the number of layers increases; on the left, as the number of vertices increases. Time is measured in seconds.

Additionally, the time complexity rises more rapidly with an increase in the number of vertices.
This outcome results from the significant growth of the combinatorial solution space as the number of vertices increases. Despite this, the modeling approach maintains a reasonable time complexity in such scenarios, owing to our efficient initialization process, which accelerates convergence to a favorable local minimum and achieves speeds over 50 times faster than random initialization.
## 5.2.9 Model Selection
The defined MDCSBM model requires prior knowledge of the number of clusters. One could consider formulations that optimize the number of clusters during inference, as presented in Roy et al. (2006); Amini et al. (2024). Such models are based on the Chinese Restaurant Process (CRP) to determine the number of clusters. CRP modeling is out of the scope of this paper, and this enhancement is reserved for future work. In this experiment, we show the results of using the BIC criterion to determine the number of clusters, as defined in Equation (??). It is worth remembering that the BIC criterion determines the optimal number of blocks and groups by balancing the fit of the model to the data against a penalty on model complexity.
In this experiment, we create a multiplex graph of 30 layers containing 3 groups of 10 layers each. We set π_intra = 0.5 and π_inter = 0.3 for all layers. The block distribution within each group consists of four blocks, as follows: G1 = {25, 25, 25, 25}, G2 = {20, 25, 25, 30}, and G3 = {30, 30, 20, 20}, where G_i represents the i-th group.
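The generic form BIC = p ln n − 2 ln L̂ underlies this selection. A toy sketch with hypothetical log-likelihood values (the parameter count used here is illustrative, not the paper's exact penalty term):

```python
import numpy as np

def bic(log_likelihood, n_params, n_obs):
    """Generic BIC: lower is better; penalizes model complexity."""
    return n_params * np.log(n_obs) - 2.0 * log_likelihood

# Hypothetical sweep over the number of blocks Q for one group: a
# Q-block model has Q*(Q+1)/2 block probabilities plus (Q - 1) free
# mixing proportions. The log-likelihood values below are made up.
N = 100
n_obs = N * (N - 1) // 2          # dyads in one undirected layer
for Q, ll in [(2, -3000.0), (4, -2500.0), (6, -2480.0)]:
    p = Q * (Q + 1) // 2 + (Q - 1)
    print(Q, round(bic(ll, p, n_obs), 1))
```

With these made-up values the minimum falls at Q = 4: the fit improvement from Q = 4 to Q = 6 is too small to pay for the extra parameters, which is exactly the balance the text describes.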
![16_image_2.png](16_image_2.png)
Figure 10: The BIC metric for finding the optimal number of groups and blocks. On the left, the number of groups is fixed at 3 and the BIC is computed for a varying number of blocks. On the right, the number of blocks is fixed at 4 and the BIC is computed for a varying number of groups.

Figure 10 shows the variation of the BIC with the number of blocks when the number of groups is fixed at 3 (left), and with the number of groups when the number of blocks is fixed at 4 (right). In both experiments, the minimal BIC value indicates the expected optimal number of clusters, showcasing the criterion's ability to select the number of clusters in this toy example.
## 5.3 Real World Data
In our real-world experiments, we investigate two distinct datasets. The first dataset is derived from merging the BBC and BBCSport datasets, providing a diverse collection of information. The second dataset consists of connectivity data obtained from human brain scans, specifically diffusion magnetic resonance imaging (dMRI).
## 6 Conclusion
Throughout this paper, we have introduced the Mixture Degree-Corrected Stochastic Block Model (MDCSBM) for multi-group community detection in multiplex graphs. The MDCSBM identifies the groups of layers that share a similar community structure. For each identified group, a distinct DCSBM is then derived to ascertain the community membership of each vertex. We have devised an Expectation-Maximization (EM) framework for estimating layer-to-group assignment variables, followed by a Variational EM technique for estimating vertex-to-block assignments. A novel centroid methodology has been proposed to initialize the layer-to-group and vertex-to-block variables, enhancing the model's convergence.

This model has been formulated to refine the estimation of the generative model underlying multiplex graphs. It significantly contributes to a better comprehension of community structures within multiplex graphs characterized by multiple groups of community memberships. While the current presentation exclusively addresses unweighted graphs, extensions that incorporate edge weights through alternative probability distributions, such as Gaussian or Poisson distributions, are possible. Such extensions would undoubtedly enrich the model's applicability in capturing the intricacies of diverse real-world scenarios.
## References

Emmanuel Abbe. Community detection and stochastic block models: recent developments. *The Journal of Machine Learning Research*, 18(1):6446–6531, 2017.

Nazanin Afsarmanesh and Matteo Magnani. Finding overlapping communities in multiplex networks. *arXiv preprint arXiv:1602.03746*, 2016.

Melissa Ailem, François Role, and Mohamed Nadif. Co-clustering document-term matrices by direct maximization of graph modularity. In *Proceedings of the 24th ACM International Conference on Information and Knowledge Management*, CIKM '15, pp. 1807–1810, New York, NY, USA, 2015. Association for Computing Machinery. ISBN 9781450337946. doi: 10.1145/2806416.2806639. URL https://doi.org/10.1145/2806416.2806639.
Arash Amini, Marina Paez, and Lizhen Lin. Hierarchical stochastic block model for community detection in multiplex networks. *Bayesian Analysis*, 19(1):319–345, 2024.

Jesús Arroyo, Avanti Athreya, Joshua Cape, Guodong Chen, Carey E. Priebe, and Joshua T. Vogelstein. Inference for multiple heterogeneous networks with a common invariant subspace, 2020.

Pierre Barbillon, Sophie Donnet, Emmanuel Lazega, and Avner Bar-Hen. Stochastic block models for multiplex networks: an application to a multilevel network of researchers. *Journal of the Royal Statistical Society Series A: Statistics in Society*, 180(1):295–314, 2017.

Punam Bedi and Chhavi Sharma. Community detection in social networks. *Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery*, 6, 2016. doi: 10.1002/widm.1178.

Michele Berlingerio, Fabio Pinelli, and Francesco Calabrese. Abacus: frequent pattern mining-based community discovery in multidimensional networks, 2013. URL https://arxiv.org/abs/1303.2025.

Peter Bickel, David Choi, Xiangyu Chang, and Hai Zhang. Asymptotic normality of maximum likelihood and its variational approximation for stochastic blockmodels. *The Annals of Statistics*, 41(4), August 2013. ISSN 0090-5364. doi: 10.1214/13-aos1124. URL http://dx.doi.org/10.1214/13-AOS1124.
Alain Celisse, J. J. Daudin, and Laurent Pierre. Consistency of maximum-likelihood and variational estimators in the stochastic block model, 2012.

Marco Corneli. *Dynamic stochastic block models, clustering and segmentation in dynamic graphs*. Theses, Université Panthéon-Sorbonne - Paris I, November 2017. URL https://theses.hal.science/tel-01926276.

Marco Corneli, Pierre Latouche, and Fabrice Rossi. Exact ICL maximization in a non-stationary temporal extension of the stochastic block model for dynamic networks. *Neurocomputing*, 192:81–91, June 2016. ISSN 0925-2312. doi: 10.1016/j.neucom.2016.02.031. URL http://dx.doi.org/10.1016/j.neucom.2016.02.031.

Caterina De Bacco, Eleanor A. Power, Daniel B. Larremore, and Cristopher Moore. Community detection, link prediction, and layer interdependence in multilayer networks. *Physical Review E*, 95(4), April 2017. ISSN 2470-0053. doi: 10.1103/physreve.95.042317. URL http://dx.doi.org/10.1103/PhysRevE.95.042317.

Manlio De Domenico, Andrea Lancichinetti, Alex Arenas, and Martin Rosvall. Identifying modular flows on multilayer networks reveals highly overlapping organization in interconnected systems. *Phys. Rev. X*, 5:011027, March 2015. doi: 10.1103/PhysRevX.5.011027. URL https://link.aps.org/doi/10.1103/PhysRevX.5.011027.

Pasquale De Meo, Emilio Ferrara, Giacomo Fiumara, and Alessandro Provetti. Generalized Louvain method for community detection in large networks. In *2011 11th International Conference on Intelligent Systems Design and Applications*, pp. 88–93. IEEE, 2011.

Vishnu Manasa Devagiri, Veselka Boeva, Shahrooz Abghari, Farhad Basiri, and Niklas Lavesson. Multi-view data analysis techniques for monitoring smart building systems. *Sensors*, 21(20), 2021. ISSN 1424-8220. doi: 10.3390/s21206775. URL https://www.mdpi.com/1424-8220/21/20/6775.
Nathan Eagle and Alex (Sandy) Pentland. Reality Mining: Sensing complex social systems. *Personal Ubiquitous Comput.*, 10(4):255–268, 2006.

Nada Elgendy and Ahmed Elragal. Big data analytics: A literature review paper. volume 8557, pp. 214–227, 2014. ISBN 978-3-319-08975-1. doi: 10.1007/978-3-319-08976-8_16.

Santo Fortunato. Community detection in graphs. *Physics Reports*, 486(3):75–174, 2010a. ISSN 0370-1573. doi: 10.1016/j.physrep.2009.11.002. URL https://www.sciencedirect.com/science/article/pii/S0370157309002841.

Santo Fortunato. Community detection in graphs. *Physics Reports*, 486(3-5):75–174, February 2010b. doi: 10.1016/j.physrep.2009.11.002. URL https://doi.org/10.1016/j.physrep.2009.11.002.

Zaynab Hammoud and Frank Kramer. Multilayer networks: aspects, implementations, and application in biomedicine. *Big Data Analytics*, 5, July 2020. doi: 10.1186/s41044-020-00046-0.

Beibei Han, Yingmei Wei, Qingyong Wang, and Shanshan Wan. Dual adaptive learning multi-task multi-view for graph network representation learning. *Neural Networks*, 162:297–308, 2023.

Qiuyi Han, Kevin S. Xu, and Edoardo M. Airoldi. Consistent estimation of dynamic and multi-layer block models, 2015.

Zhao Kang, Guoxin Shi, Shudong Huang, Wenyu Chen, Xiaorong Pu, Joey Tianyi Zhou, and Zenglin Xu. Multi-graph fusion for multi-view spectral clustering, 2019.

Brian Karrer and M. E. J. Newman. Stochastic blockmodels and community structure in networks. *Physical Review E*, 83(1), January 2011. ISSN 1550-2376. doi: 10.1103/physreve.83.016107. URL http://dx.doi.org/10.1103/PhysRevE.83.016107.

Charles Kemp, Joshua B. Tenenbaum, Thomas L. Griffiths, Takeshi Yamada, and Naonori Ueda. Learning systems of concepts with an infinite relational model. In *Proceedings of the 21st National Conference on Artificial Intelligence - Volume 1*, AAAI'06, pp. 381–388. AAAI Press, 2006. ISBN 9781577352815.

Alec Kirkley and M. E. J. Newman. Representative community divisions of networks. *Communications Physics*, 5(1), February 2022. ISSN 2399-3650. doi: 10.1038/s42005-022-00816-3. URL http://dx.doi.org/10.1038/s42005-022-00816-3.
Alec Kirkley, Alexis Rojas, Martin Rosvall, and Jean-Gabriel Young. Compressing network populations with modal networks reveal structural diversity. *Communications Physics*, 6(1), June 2023. ISSN 2399-3650. doi: 10.1038/s42005-023-01270-5. URL http://dx.doi.org/10.1038/s42005-023-01270-5.

Patricio La Rosa, Terrence Brooks, Elena Deych, Berkley Shands, F. Prior, Linda Larson-Prior, and William Shannon. Gibbs distribution for statistical analysis of graphical data with a sample application to fcMRI brain images. *Statistics in Medicine*, 35, 2015. doi: 10.1002/sim.6757.

Clement Lee and Darren J. Wilkinson. A review of stochastic block models and extensions for graph clustering. *Applied Network Science*, 4(1), December 2019. doi: 10.1007/s41109-019-0232-2. URL https://doi.org/10.1007/s41109-019-0232-2.

Wenzhe Li, Sungjin Ahn, and Max Welling. Scalable MCMC for mixed membership stochastic blockmodels, 2015.

Yixuan Li, Kun He, Kyle Kloster, David Bindel, and John Hopcroft. Local spectral clustering for overlapping community detection. *ACM Transactions on Knowledge Discovery from Data (TKDD)*, 12(2):1–27, 2018.

Matteo Magnani, Obaida Hanteer, Roberto Interdonato, Luca Rossi, and Andrea Tagarelli. Community detection in multiplex networks. *ACM Computing Surveys (CSUR)*, 54(3):1–35, 2021.

Domenico Mandaglio, Alessia Amelio, and Andrea Tagarelli. Consensus community detection in multilayer networks using parameter-free graph pruning. In *Advances in Knowledge Discovery and Data Mining: 22nd Pacific-Asia Conference, PAKDD 2018, Melbourne, VIC, Australia, June 3-6, 2018, Proceedings, Part III*, pp. 193–205. Springer, 2018.

Anastasia Mantziou, Simon Lunagomez, and Robin Mitra. Bayesian model-based clustering for populations of network data, 2023.

Peter J. Mucha, Thomas Richardson, Kevin Macon, Mason A. Porter, and Jukka-Pekka Onnela. Community structure in time-dependent, multiscale, and multiplex networks. *Science*, 328(5980):876–878, 2010. doi: 10.1126/science.1184819. URL https://www.science.org/doi/abs/10.1126/science.1184819.

Mohamed Nadif and Gerard Govaert. Model-based co-clustering for continuous data. In *2010 Ninth International Conference on Machine Learning and Applications*, pp. 175–180, 2010. doi: 10.1109/ICMLA.2010.33.
Teng Niu, Shiai Zhu, Lei Pang, and Abdulmotaleb El Saddik. Sentiment analysis on multi-view social data. In *MultiMedia Modeling: 22nd International Conference, MMM 2016, Miami, FL, USA, January 4-6, 2016, Proceedings, Part II*, pp. 15–27. Springer, 2016.

Evangelos E. Papalexakis, Christos Faloutsos, and Nicholas D. Sidiropoulos. Tensors for data mining and data fusion: Models, applications, and scalable algorithms. *ACM Trans. Intell. Syst. Technol.*, 8(2), October 2016. ISSN 2157-6904. doi: 10.1145/2915921. URL https://doi.org/10.1145/2915921.

Subhadeep Paul and Yuguo Chen. Consistent community detection in multi-relational data through restricted multi-layer stochastic blockmodel. *Electronic Journal of Statistics*, 10(2), 2016. ISSN 1935-7524. doi: 10.1214/16-ejs1211. URL http://dx.doi.org/10.1214/16-EJS1211.

Xinyu Que, Fabio Checconi, Fabrizio Petrini, and John A Gunnels. Scalable community detection with the Louvain algorithm. In *2015 IEEE International Parallel and Distributed Processing Symposium*, pp. 28–37. IEEE, 2015.

Daniel M Roy, Charles Kemp, Vikash Mansinghka, and Joshua Tenenbaum. Learning annotated hierarchies from relational data. In B. Schölkopf, J. Platt, and T. Hoffman (eds.), *Advances in Neural Information Processing Systems*, volume 19. MIT Press, 2006. URL https://proceedings.neurips.cc/paper_files/paper/2006/file/663fd3c5144fd10bd5ca6611a9a5b92d-Paper.pdf.

Liangxun Shuo and Bianfang Chai. Discussion of the community detection algorithm based on statistical inference. *Perspectives in Science*, 7:122–125, 2016. ISSN 2213-0209. doi: 10.1016/j.pisc.2015.11.020. URL https://www.sciencedirect.com/science/article/pii/S2213020915000658. 1st Czech-China Scientific Conference 2015.

Andrea Tagarelli, Alessia Amelio, and Francesco Gullo. Ensemble-based community detection in multilayer networks. *Data Mining and Knowledge Discovery*, 31:1506–1543, 2017.
Lei Tang, Xufei Wang, and Huan Liu. Community detection via heterogeneous interaction analysis. *Data Mining and Knowledge Discovery*, 25:1–33, 2012.

Toni Vallès-Català, Francesco A. Massucci, Roger Guimerà, and Marta Sales-Pardo. Multilayer stochastic block models reveal the multilayer structure of complex networks. *Physical Review X*, 6(1), March 2016. doi: 10.1103/physrevx.6.011036. URL https://doi.org/10.1103/physrevx.6.011036.

Jan van der Laan, Edwin de Jonge, Marjolijn Das, Saskia Te Riele, and Tom Emery. A Whole Population Network and Its Application for the Social Sciences. *European Sociological Review*, 39(1):145–160, June 2022. ISSN 0266-7215. doi: 10.1093/esr/jcac026. URL https://doi.org/10.1093/esr/jcac026.

Hao Wang, Yan Yang, and Bing Liu. GMC: Graph-based multi-view clustering. *IEEE Transactions on Knowledge and Data Engineering*, 32(6):1116–1129, 2019.

Hao Wang, Yan Yang, and Bing Liu. GMC: Graph-based multi-view clustering. *IEEE Transactions on Knowledge and Data Engineering*, 32(6):1116–1129, 2020. doi: 10.1109/TKDE.2019.2903810.

James D. Wilson, John Palowitch, Shankar Bhamidi, and Andrew B. Nobel. Community extraction in multilayer networks with heterogeneous community structure, 2017.

Tianyu Xia, Yijun Gu, and Dechun Yin. Research on the link prediction model of dynamic multiplex social network based on improved graph representation learning. *IEEE Access*, 9:412–420, 2020.

Jean-Gabriel Young, Alec Kirkley, and M. E. J. Newman. Clustering of heterogeneous populations of networks. *Physical Review E*, 105(1), January 2022. ISSN 2470-0053. doi: 10.1103/physreve.105.014312. URL http://dx.doi.org/10.1103/PhysRevE.105.014312.

Xi-Nian Zuo, Jeffrey Anderson, Pierre Bellec, Rasmus Birn, Bharat Biswal, Janusch Blautzik, John Breitner, Randy Buckner, Vince Calhoun, Francisco Castellanos, Antao Chen, Bing Chen, Jiangtao Chen, Xu Chen, Stanley Colcombe, William Courtney, Cameron Craddock, Adriana Di Martino, Haoming Dong, and Michael Milham. An open science resource for establishing reliability and reproducibility in functional connectomics. *Scientific Data*, 1, 2015. doi: 10.1038/sdata.2014.49.

Katharina A. Zweig. *Network Representations of Complex Systems*, pp. 109–148. Springer Vienna, Vienna, 2016. ISBN 978-3-7091-0741-6. doi: 10.1007/978-3-7091-0741-6_5. URL https://doi.org/10.1007/978-3-7091-0741-6_5.
## A Identifiability
The identifiability of the parameters of the uni-layer Bernoulli SBM has been proved in Celisse et al. (2012). The proof has been extended to multiplex graphs with a pillar division in Barbillon et al. (2017). We extend this analysis to the multiplex DCSBM with multiple groups.
Theorem A.1. *Assume that there are* $K$ *groups and that every group has the same number of blocks,* $Q^k = Q^{k'} = Q$, $\forall k, k' \in \{1, \dots, K\}$. *Assume that for any* $q \in \{1, \dots, Q\}$ *and* $k \in \{1, \dots, K\}$, $\alpha^k_q > 0$ *and* $\beta_k > 0$. *Let* $\mathbf{\Pi} \in \,]0,1[^{KQ \times KQ}$ *be the block-diagonal matrix that contains the matrices* $\mathbf{\Pi}^k$ *on its diagonal:*

$$\mathbf{\Pi}=\begin{bmatrix}\mathbf{\Pi}^{1}&\dots&0\\ \vdots&\ddots&\vdots\\ 0&\dots&\mathbf{\Pi}^{K}\end{bmatrix}$$

*Let also* $\boldsymbol{\alpha}$ *be the* $KQ \times KQ$ *diagonal matrix built from the vector* $[\alpha^1_1, \dots, \alpha^1_Q, \dots, \alpha^K_Q]$*, and* $\boldsymbol{\beta}$ *the* $KQ \times KQ$ *diagonal matrix built from the vector* $[\beta^1, \dots, \beta^1, \beta^2, \dots, \beta^K]$*, where each* $\beta^i$ *is repeated* $Q$ *times,* $\forall i \in \{1, \dots, K\}$. *Assume that the elements of* $r = \mathbf{\Pi}\boldsymbol{\alpha}\boldsymbol{\beta}$ *are distinct. Then the MDCSBM parameters are identifiable up to equivalent solutions.*
Proof. We consider the node degree heterogeneity parameter constant and extend the proof of Celisse et al. (2012) to the MDCSBM model as follows. For any group $k$, $r_{q,k}$ is the probability for a given member of block $q$ in group $k$ to have a connection with another member of the same group:

$$r_{q,k}=\sum_{l=1}^{Q}\beta_{k}\pi_{ql}^{k}\alpha_{l}^{k}$$

Let $R$ be the $QK$-square matrix such that $R_{i,(q,k)}=(r_{q,k})^{i}$ for $i \in \{0, \dots, QK-1\}$. $R$ is a Vandermonde matrix, which is invertible by assumption.

Therefore, for any $i = 0, \dots, (2Q-1)K$, let us set

$$\mu_{i}=\sum_{q,k}\alpha_{q,k}(r_{q,k})^{i}\tag{30}$$

and let $M$ be the $K(Q+1)\times KQ$ matrix such that

$$M_{ij}=\mu_{i+j}\tag{31}$$

For any $i = 0, \dots, QK$, we define the $QK$-square matrix $M^{i}$ by removing line $i$ from $M$. Hence,

$$M^{QK}=R\boldsymbol{\alpha}R^{T}\tag{32}$$

where $\boldsymbol{\alpha}$ is the $QK$ diagonal matrix defined previously, with all $\alpha^k_q \neq 0$. Because $R$ is invertible, $\det(M^{QK}) > 0$.

Let us define

$$B(X,\theta)=\sum_{i=0}^{QK}(-1)^{i+QK}\det(M^{i}(\theta))X^{i}\tag{33}$$

$B$ is of degree $QK$. For $V_{i}(\theta)=(1,r_{i}(\theta),\dots,(r_{i}(\theta))^{QK})$, we have

$$B(r_{i}(\theta),\theta)=\det\big(M(\theta),V_{i}(\theta)\big)\tag{34}$$

The columns of $M$ are linear combinations of the $V_{i}$, so $B(r_{i}(\theta),\theta)=0$ for any $i$. It means that $B$ can be factorized as follows:

$$B(x,\theta)=\det(M^{QK})\prod_{i=0}^{QK-1}\left(x-r_{i}(\theta)\right)\tag{35}$$

Assume that $\theta=(\mathbf{\Pi},\boldsymbol{\alpha},\boldsymbol{\beta})$ and $\theta'=(\mathbf{\Pi}',\boldsymbol{\alpha}',\boldsymbol{\beta}')$ are two sets of parameters such that for any multiplex graph $G$ under the multi-group model, $\mathcal{L}(G;\theta)=\mathcal{L}(G;\theta')$. Therefore $\mu_{i}(\theta)=\mu_{i}(\theta')$, which means that $M^{i}(\theta)=M^{i}(\theta')$ for any $i$. Then $B(\,\cdot\,;\theta)=B(\,\cdot\,;\theta')$ because it depends only on the determinants of the $M^{i}$, which leads to $r_{i}(\theta)=r_{i}(\theta')$. Thus $R(\theta)=R(\theta')$, and

$$\boldsymbol{\alpha}(\theta)=R(\theta)^{-1}M^{QK}\,(R(\theta)^{T})^{-1}=\boldsymbol{\alpha}(\theta')\tag{36}$$

Therefore $\boldsymbol{\alpha}=\boldsymbol{\alpha}'$. The same steps can be applied to prove the identifiability of $\boldsymbol{\beta}$, where the diagonal matrix $\boldsymbol{\alpha}$ is replaced by the diagonal matrix of $\boldsymbol{\beta}$ in which every $\beta^{k}$, $\forall k \in \{1,\dots,K\}$, is repeated $Q$ times before $\beta^{k+1}$; this leads to a matrix of the same dimension $QK \times QK$.

For $\mathbf{\Pi}$, let us define

$$U(\theta)=R(\theta)\boldsymbol{\beta}(\theta)\boldsymbol{\alpha}(\theta)\mathbf{\Pi}\boldsymbol{\alpha}(\theta)\boldsymbol{\beta}(\theta)R(\theta)^{T}$$

Since $R(\theta)=R(\theta')$, $\boldsymbol{\alpha}(\theta)=\boldsymbol{\alpha}(\theta')$ and $\boldsymbol{\beta}(\theta)=\boldsymbol{\beta}(\theta')$, then

$$U(\theta)=U(\theta')\;\rightarrow\;\mathbf{\Pi}=\mathbf{\Pi}'\tag{37}$$
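The invertibility of $R$ rests on its Vandermonde structure: as long as the values $r_{q,k}$ are pairwise distinct, the determinant $\prod_{i<j}(r_j - r_i)$ is nonzero. A minimal numerical sanity check of this fact, using arbitrary illustrative values for $r$ (not taken from any fitted model):

```python
import numpy as np

# Pairwise-distinct "within-group connection" values r_{q,k}, flattened
# over blocks and groups (here Q = 2 blocks, K = 2 groups, so QK = 4).
r = np.array([0.12, 0.31, 0.47, 0.68])

# Vandermonde matrix with R[i, j] = r[j] ** i for i = 0, ..., QK - 1.
R = np.vander(r, increasing=True).T

# Distinct nodes => nonzero determinant => R is invertible.
det = np.linalg.det(R)
assert abs(det) > 1e-12
```

The same check fails as soon as two entries of `r` coincide, which is exactly why the theorem assumes the elements of $r$ are distinct.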
## B Consistency of Maximum Likelihood
The asymptotic consistency of the maximum likelihood estimator of the Bernoulli uni-layer SBM has been studied in Celisse et al. (2012); Bickel et al. (2013). The proof of the consistency of the MDCSBM is straightforward from the proof for the MSBM model, which can be derived from the proof for the uni-layer SBM. Let us assume that the following assumptions hold:

Assumption B.1. *For every* $q \neq q'$*, there exists* $w \in \{1, \dots, Q^k\}$ *such that* $\pi^k_{qw} \neq \pi^k_{q'w}$ *or* $\pi^k_{wq} \neq \pi^k_{wq'}$.

Assumption B.2. *There exists* $\zeta > 0$ *such that* $\forall (q, w) \in \{1, \dots, Q^k\}^2$*,* $\pi^k_{qw} \in\,]0, 1[\;\rightarrow\;\pi^k_{qw} \in [\zeta, 1-\zeta]$.

Assumption B.3. *There exists* $\gamma < 1/Q^k$ *such that* $\forall q \in \{1, \dots, Q^k\}$*,* $\alpha^k_q \in\,]0, 1[\;\rightarrow\;\alpha^k_q \in [\gamma, 1-\gamma]$.

Assumption B.4. *There exists* $\xi < 1/K$ *such that* $\forall k \in \{1, \dots, K\}$*,* $\beta^k \in\,]0, 1[\;\rightarrow\;\beta^k \in [\xi, 1-\xi]$.

Theorem B.5. *Let* $(\Theta, d)$ *and* $(\Psi, d')$ *denote metric spaces and let* $M_n : \Theta \times \Psi \rightarrow \mathbb{R}$ *be a random function and* $M : \Theta \rightarrow \mathbb{R}$ *a deterministic function such that for every* $\epsilon > 0$*,*

$$\sup_{d(\theta,\theta_{0})\geq\epsilon}M(\theta)<M(\theta_{0})\tag{38}$$

$$\sup_{(\theta,\psi)\in\Theta\times\Psi}|M_{n}(\theta,\psi)-M(\theta)|:=\|M_{n}-M\|_{\Theta\times\Psi}\rightarrow0\tag{39}$$

*and* $(\hat{\theta},\hat{\psi})=\operatorname{argmax}_{\theta,\psi}M_{n}(\theta,\psi)$*, then*

$$d(\hat{\theta},\theta_{0})\rightarrow0\tag{40}$$

The proof can be performed by the same steps, taking

$$M_{n}(\pi,\alpha,\beta)=\frac{1}{N(N-1)LK}\sum_{l=1}^{L}\log\Big(\sum_{k}\sum_{z^{[n]}}\beta_{k}\prod_{i\neq j}\mathbf{Bernoulli}(\pi_{z_{i}^{k},z_{j}^{k}}^{k})\prod_{i}\alpha_{z_{i}^{k}}^{k}\Big)\tag{41}$$

$$M(\pi)=\max_{a\in\mathcal{A}}\sum_{k}\sum_{q,w}\beta^{*k}\alpha_{q}^{*k}\alpha_{w}^{*k}\sum_{q',w'}a_{qq'}^{k}a_{ww'}^{k}\big[\pi_{q,w}^{*k}\log(\pi_{q',w'}^{k})+(1-\pi_{q,w}^{*k})\log(1-\pi_{q',w'}^{k})\big]\tag{42}$$

where $\mathbf{Bernoulli}(\pi)$ is the Bernoulli distribution with parameter $\pi$, and $\beta^{*}$, $\alpha^{*}$ and $\pi^{*}$ denote the true parameters, with

$$\mathcal{A}=\{(a_{qw}^{k})_{1\leq q,w\leq Q^{k}},\;a_{qw}^{k}\geq0,\;\sum_{w}a_{qw}^{k}=1\}\tag{43}$$