# Graph Mask AutoEncoder (GraphMAE) on the QM9 Dataset
## Overview
We pretrain a Graph Mask AutoEncoder (GraphMAE) on the QM9 dataset. Each atom's position, concatenated with an embedding of its element type, forms the input node feature (dim=7); the model then reconstructs this feature with GraphSAGE using a 4-dimensional hidden representation.
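The exact preprocessing lives in `prepare_QM9_dataset.py`; the snippet below is only a rough sketch, assuming the 7 dimensions are the 3-D atom coordinates concatenated with a learned 4-dimensional element-type embedding (this 3 + 4 split is an assumption, not stated in this README). Field names `R` and `Z` follow DGL's QM9Dataset documentation.

```python
# Rough sketch (not the repository's exact preprocessing): build a 7-dim node
# feature by concatenating each atom's 3-D position with a learned 4-dim
# element-type embedding. The 3 + 4 split is an assumption.
import torch
import torch.nn as nn
from dgl.data import QM9Dataset

dataset = QM9Dataset(label_keys=["mu", "gap"])
g, labels = dataset[0]                        # one molecular graph and its labels

ATOM_TO_IDX = {1: 0, 6: 1, 7: 2, 8: 3, 9: 4}  # H, C, N, O, F
elem_embed = nn.Embedding(num_embeddings=5, embedding_dim=4)

pos = g.ndata["R"]                                     # (N, 3) atom coordinates
elem_idx = torch.tensor([ATOM_TO_IDX[int(z)] for z in g.ndata["Z"]])
feat = torch.cat([pos, elem_embed(elem_idx)], dim=-1)  # (N, 7) input feature
```
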
## How to run
- Step 1. Preprocess the dataset (a preprocessed copy is also provided):

```bash
python prepare_QM9_dataset.py --label_keys "mu" "gap"
```
- Step 2. Train the Graph Mask AutoEncoder on the preprocessed dataset:
```bash
python run.py [--dataset_path] [--batch_size] [--epochs] [--device] [--save_dir]
```

## Model Description
### Overview
Ref: **[GraphMAE](https://arxiv.org/abs/2205.10803)**
> Self-supervised learning (SSL) has been extensively explored in recent years. Particularly, generative SSL has seen emerging success in natural language processing and other AI fields, such as the wide adoption of BERT and GPT. Despite this, contrastive learning - which heavily relies on structural data augmentation and complicated training strategies - has been the dominant approach in graph SSL, while the progress of generative SSL on graphs, especially graph autoencoders (GAEs), has thus far not reached the potential as promised in other fields. In this paper, we identify and examine the issues that negatively impact the development of GAEs, including their reconstruction objective, training robustness, and error metric. We present a masked graph autoencoder GraphMAE that mitigates these issues for generative self-supervised graph pretraining. Instead of reconstructing graph structures, we propose to focus on feature reconstruction with both a masking strategy and scaled cosine error that benefit the robust training of GraphMAE. We conduct extensive experiments on 21 public datasets for three different graph learning tasks. The results manifest that GraphMAE - a simple graph autoencoder with careful designs - can consistently generate outperformance over both contrastive and generative state-of-the-art baselines. This study provides an understanding of graph autoencoders and demonstrates the potential of generative self-supervised pre-training on graphs.

### Details
- Encoder & decoder: two-layer [GraphSAGE](https://docs.dgl.ai/generated/dgl.nn.pytorch.conv.SAGEConv.html)
- Readout: mean
- Hidden dims: 4 (default)
- Mask rate: 0.3 (default)
- Hardware: trained on an RTX 4060

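The snippet below is a minimal, illustrative sketch of how these components could fit together: a two-layer GraphSAGE encoder and decoder, a 0.3 node-mask rate, and the scaled cosine error described in the GraphMAE abstract. It is not this repository's implementation, which may differ (e.g. re-masking before decoding or additional normalization).

```python
# Illustrative sketch only: two-layer GraphSAGE encoder/decoder with random
# node masking (rate 0.3) and the scaled cosine error from the GraphMAE paper.
# The repository's run.py may differ in details.
import torch
import torch.nn as nn
import torch.nn.functional as F
from dgl.nn import SAGEConv

class GraphMAESketch(nn.Module):
    def __init__(self, in_dim=7, hidden_dim=4, mask_rate=0.3):
        super().__init__()
        self.mask_rate = mask_rate
        self.mask_token = nn.Parameter(torch.zeros(in_dim))   # learnable [MASK] feature
        self.enc1 = SAGEConv(in_dim, hidden_dim, aggregator_type="mean")
        self.enc2 = SAGEConv(hidden_dim, hidden_dim, aggregator_type="mean")
        self.dec1 = SAGEConv(hidden_dim, hidden_dim, aggregator_type="mean")
        self.dec2 = SAGEConv(hidden_dim, in_dim, aggregator_type="mean")

    def forward(self, g, feat):
        # Mask a random subset of nodes and replace their features with the mask token.
        mask = torch.rand(g.num_nodes(), device=feat.device) < self.mask_rate
        x = feat.clone()
        x[mask] = self.mask_token

        h = F.relu(self.enc1(g, x))        # encoder
        h = self.enc2(g, h)
        r = F.relu(self.dec1(g, h))        # decoder reconstructs the input features
        recon = self.dec2(g, r)

        # Scaled cosine error on the masked nodes (gamma = 2, as in GraphMAE).
        cos = F.cosine_similarity(recon[mask], feat[mask], dim=-1)
        loss = ((1.0 - cos) ** 2).mean()
        return loss, h

# Hypothetical usage with a QM9 graph `g` and its 7-dim features `feat`:
# model = GraphMAESketch()
# loss, node_repr = model(g, feat)
```

A graph-level embedding can then be obtained with the mean readout listed above, e.g. by averaging the returned node representations.
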
## Dataset Description
### Overview
Ref: **[QM9](https://docs.dgl.ai/generated/dgl.data.QM9Dataset.html)**
> Task type: molecule property prediction
>
> Number of samples: 130,831
>
> Elements: H, C, N, O, F
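
As a quick sanity check, the figures quoted above can be reproduced directly from DGL's `QM9Dataset` (field name `Z` follows the DGL docs; the first molecule may contain only a subset of the five elements):

```python
# Quick check of the statistics quoted above using DGL's QM9Dataset.
from dgl.data import QM9Dataset

dataset = QM9Dataset(label_keys=["mu", "gap"])
print(len(dataset))                            # 130831 molecular graphs
g, y = dataset[0]
print(sorted({int(z) for z in g.ndata["Z"]}))  # atomic numbers drawn from {1, 6, 7, 8, 9} = H, C, N, O, F
print(y)                                       # labels for the requested keys: [mu, gap]
```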