# Graph Random Neural Network (GRAND)

This DGL example implements the GNN model proposed in the paper [Graph Random Neural Network for Semi-Supervised Learning on Graphs](https://arxiv.org/abs/2005.11079).

Paper link: https://arxiv.org/abs/2005.11079

Author's code: https://github.com/THUDM/GRAND

Contributor: Hengrui Zhang ([@hengruizhang98](https://github.com/hengruizhang98))

## Dependencies
- Python 3.7
- PyTorch 1.7.1
- numpy
- dgl 0.5.3
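
One way to install the pinned versions (CPU builds; CUDA wheels of PyTorch and DGL are platform-specific):
```bash
pip install torch==1.7.1 dgl==0.5.3 numpy
```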

## Dataset

DGL's built-in Cora, Citeseer, and Pubmed datasets are used. Dataset summary:

| Dataset | #Nodes | #Edges | #Feats | #Classes | #Train Nodes | #Val Nodes | #Test Nodes |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| Citeseer | 3,327 | 9,228 | 3,703 | 6 | 120 | 500 | 1000 |
| Cora | 2,708 | 10,556 | 1,433 | 7 | 140 | 500 | 1000 |
| Pubmed | 19,717 | 88,651 | 500 | 3 | 60 | 500 | 1000 |
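
All three can be loaded through DGL's built-in dataset classes. A minimal sketch, using Cora as an example (features, labels, and the standard split masks are stored as `ndata` fields):

```python
from dgl.data import CoraGraphDataset  # CiteseerGraphDataset / PubmedGraphDataset work the same way

dataset = CoraGraphDataset()
g = dataset[0]                      # the single citation graph
feats = g.ndata['feat']             # node features, e.g. shape (2708, 1433) for Cora
labels = g.ndata['label']           # node class labels
train_mask = g.ndata['train_mask']  # boolean masks for the standard split
val_mask = g.ndata['val_mask']
test_mask = g.ndata['test_mask']
```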

## Arguments

###### Dataset options
```
--dataname          str     The graph dataset name.             Default is 'cora'.
```

###### GPU options
```
--gpu              int     GPU index.                          Default is -1, using CPU.
```

###### Model options
```
--epochs           int     Number of training epochs.             Default is 2000.
--early_stopping   int     Early stopping patience rounds.        Default is 200.
--lr               float   Adam optimizer learning rate.          Default is 0.01.
--weight_decay     float   L2 regularization coefficient.         Default is 5e-4.
--dropnode_rate    float   Dropnode rate (1 - keep probability).  Default is 0.5.
--input_droprate   float   Dropout rate of input layer.           Default is 0.5.
--hidden_droprate  float   Dropout rate of hidden layer.          Default is 0.5.
--hid_dim          int     Hidden layer dimension.                Default is 32.
--order            int     Number of propagation steps.           Default is 8.
--sample           int     Number of DropNode samples per epoch.  Default is 4.
--tem              float   Sharpening temperature.                Default is 0.5.
--lam              float   Consistency regularization coefficient. Default is 1.0.
--use_bn           bool    Use batch normalization.               Default is False.
```
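
For reference, how these options interact: GRAND builds each of the `--sample` augmentations by DropNode (`--dropnode_rate`) followed by `--order` steps of mixed-order propagation, then penalizes disagreement between the augmented predictions against a sharpened consensus (`--tem`), weighted by `--lam`. A minimal PyTorch sketch of these three pieces; the helper names are illustrative, not the exact functions in `main.py`:

```python
import torch

def drop_node(feats, drop_rate, training=True):
    # DropNode: zero out entire node feature rows with probability drop_rate;
    # at inference, use the expectation by scaling with the keep probability.
    if not training:
        return feats * (1.0 - drop_rate)
    n = feats.shape[0]
    mask = torch.bernoulli(torch.full((n, 1), 1.0 - drop_rate, device=feats.device))
    return feats * mask

def propagate(feats, adj, order):
    # Mixed-order propagation: average of A_hat^k X for k = 0..order, where
    # adj is the symmetrically normalized adjacency matrix with self-loops.
    x, y = feats, feats.clone()
    for _ in range(order):
        x = torch.sparse.mm(adj, x) if adj.is_sparse else adj @ x
        y = y + x
    return y / (order + 1)

def consistency_loss(log_probs, tem):
    # Sharpen the mean of the per-sample class distributions with temperature
    # tem, then penalize each sample's squared distance to the sharpened target.
    probs = torch.stack([lp.exp() for lp in log_probs])  # (samples, nodes, classes)
    avg = probs.mean(dim=0)
    sharp = avg.pow(1.0 / tem)
    sharp = (sharp / sharp.sum(dim=1, keepdim=True)).detach()
    return ((probs - sharp) ** 2).sum(dim=2).mean()
```

The training objective is then the supervised loss averaged over the `--sample` augmented predictions plus `lam` times the consistency term.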

## Examples

Train the model with the hyperparameters from the original paper on each dataset:
```bash
# Cora:
python main.py --dataname cora --gpu 0 --lam 1.0 --tem 0.5 --order 8 --sample 4 --input_droprate 0.5 --hidden_droprate 0.5 --dropnode_rate 0.5 --hid_dim 32 --early_stopping 100 --lr 1e-2 --epochs 2000

# Citeseer:
python main.py --dataname citeseer --gpu 0 --lam 0.7 --tem 0.3 --order 2 --sample 2 --input_droprate 0.0 --hidden_droprate 0.2 --dropnode_rate 0.5 --hid_dim 32 --early_stopping 100 --lr 1e-2 --epochs 2000

# Pubmed:
python main.py --dataname pubmed --gpu 0 --lam 1.0 --tem 0.2 --order 5 --sample 4 --input_droprate 0.6 --hidden_droprate 0.8 --dropnode_rate 0.5 --hid_dim 32 --early_stopping 200 --lr 0.2 --epochs 2000 --use_bn
```

### Performance

The hyperparameter settings in our implementation are identical to those reported in the paper.

| Dataset | Cora | Citeseer | Pubmed |
| :-: | :-: | :-: | :-: |
| Reported accuracy (100 runs) | **85.4 (±0.4)** | **75.4 (±0.4)** | 82.7 (±0.6) |
| DGL accuracy (20 runs) | 85.33 (±0.41) | 75.36 (±0.36) | **82.90 (±0.66)** |