Inductive Representation Learning on Large Graphs (GraphSAGE)
============

- Paper link: [http://papers.nips.cc/paper/6703-inductive-representation-learning-on-large-graphs.pdf](http://papers.nips.cc/paper/6703-inductive-representation-learning-on-large-graphs.pdf)
- Author's code repo: [https://github.com/williamleif/graphsage-simple](https://github.com/williamleif/graphsage-simple). Note that the original code is a simple reference implementation of GraphSAGE.

Requirements
------------
- requests

```bash
pip install requests
```


Results
-------

### Full graph training

Run with the following command (available datasets: "cora", "citeseer", "pubmed"):
```bash
python3 train_full.py --dataset cora --gpu 0    # full graph
```

Accuracy:

* cora: ~0.8330 
* citeseer: ~0.7110
* pubmed: ~0.7830
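
For reference, the model these scripts train is roughly the two-layer GraphSAGE below. This is a minimal sketch using DGL's `SAGEConv`, not the exact code in `train_full.py`; the hidden size and the `mean` aggregator are illustrative assumptions.

```python
# Minimal two-layer GraphSAGE for full-graph node classification.
# Hidden size and aggregator type are assumptions; see train_full.py
# for the settings behind the numbers above.
import torch.nn as nn
import torch.nn.functional as F
from dgl.nn import SAGEConv

class SAGE(nn.Module):
    def __init__(self, in_feats, n_hidden, n_classes):
        super().__init__()
        self.conv1 = SAGEConv(in_feats, n_hidden, aggregator_type="mean")
        self.conv2 = SAGEConv(n_hidden, n_classes, aggregator_type="mean")

    def forward(self, graph, feat):
        h = F.relu(self.conv1(graph, feat))  # aggregate 1-hop neighborhoods
        return self.conv2(graph, h)          # class logits per node
```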

### Minibatch training

Train with mini-batch neighbor sampling (on the Reddit dataset):
```bash
python3 train_sampling.py --num-epochs 30       # neighbor sampling
python3 train_sampling.py --num-epochs 30 --inductive  # inductive learning with neighbor sampling
python3 train_sampling_multi_gpu.py --num-epochs 30    # neighbor sampling with multi GPU
python3 train_sampling_multi_gpu.py --num-epochs 30 --inductive  # inductive learning with neighbor sampling, multi GPU
python3 train_cv.py --num-epochs 30             # control variate sampling
python3 train_cv_multi_gpu.py --num-epochs 30   # control variate sampling with multi GPU
```

Accuracy:

| Model                 | Accuracy |
|:---------------------:|:--------:|
| Full Graph            | 0.9504   |
| Neighbor Sampling     | 0.9495   |
| N.S. (Inductive)      | 0.9460   |
| Control Variate       | 0.9490   |
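
The neighbor-sampling scripts build minibatches of message-flow graphs ("blocks") rather than training on the full graph. A rough sketch of the training loop follows, assuming the DGL 0.8+ `dgl.dataloading` API; `g`, `train_nids`, `model`, and `opt` stand in for objects the real scripts construct, and the fan-outs and batch size are illustrative.

```python
import dgl
import torch.nn.functional as F

sampler = dgl.dataloading.NeighborSampler([10, 25])  # fan-out per GNN layer (assumed)
dataloader = dgl.dataloading.DataLoader(
    g, train_nids, sampler,                  # graph and seed-node IDs (assumed given)
    batch_size=1024, shuffle=True, drop_last=False)

for input_nodes, output_nodes, blocks in dataloader:
    x = blocks[0].srcdata["feat"]            # input features of all sampled nodes
    y = blocks[-1].dstdata["label"]          # labels of the seed (output) nodes
    loss = F.cross_entropy(model(blocks, x), y)  # model applies one layer per block
    opt.zero_grad()
    loss.backward()
    opt.step()
```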

### Unsupervised training

Train with mini-batch sampling in an unsupervised fashion (on the Reddit dataset):
```bash
python3 train_sampling_unsupervised.py
```

Notably,

* The loss function is defined by predicting whether an edge exists between two nodes (see the loss sketch
  below). This matches the official implementation, and is equivalent to the loss defined in the paper with
  1-hop random walks.
* When computing the score of `(u, v)`, the connections between nodes `u` and `v` are removed from neighbor
  sampling. This trick increases the micro-F1 score on the test set by 0.02.
* The performance of the learned embeddings is measured by training a softmax regression with scikit-learn,
  as described in the paper.
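
A minimal sketch of that edge-prediction loss, assuming node embeddings `z` and index tensors for positive (real-edge) and negatively sampled node pairs; the names here are illustrative, not the identifiers used in `train_sampling_unsupervised.py`:

```python
import torch
import torch.nn.functional as F

def unsupervised_loss(z, pos_u, pos_v, neg_u, neg_v):
    """z: (N, d) node embeddings; (pos_u, pos_v) index real edges,
    (neg_u, neg_v) index negatively sampled node pairs."""
    pos_score = (z[pos_u] * z[pos_v]).sum(dim=1)   # dot-product score per pair
    neg_score = (z[neg_u] * z[neg_v]).sum(dim=1)
    # Binary cross-entropy: real edges are labeled 1, sampled non-edges 0.
    scores = torch.cat([pos_score, neg_score])
    labels = torch.cat([torch.ones_like(pos_score), torch.zeros_like(neg_score)])
    return F.binary_cross_entropy_with_logits(scores, labels)
```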

The micro-F1 score reaches 0.9212 on the test set.
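
The evaluation step corresponds roughly to the following, assuming frozen embeddings `emb`, node `labels`, and train/test index arrays (again, illustrative names only):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

# Softmax (multinomial logistic) regression on the frozen embeddings.
clf = LogisticRegression(multi_class="multinomial", max_iter=1000)
clf.fit(emb[train_idx], labels[train_idx])
print(f1_score(labels[test_idx], clf.predict(emb[test_idx]), average="micro"))
```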

### Training with PyTorch Lightning

We also provide minibatch training scripts with PyTorch Lightning in `train_lightning.py` and `train_lightning_unsupervised.py`.

Requires `pytorch_lightning` and `torchmetrics`.

```bash
python3 train_lightning.py
python3 train_lightning_unsupervised.py
```