Unverified commit f99725ad, authored by Chao Ma, committed by GitHub

all demo use python-3 (#555)

parent 605b5185
......@@ -19,5 +19,5 @@ pip install requests
### Usage (make sure that DGLBACKEND is set to mxnet)
```bash
-DGLBACKEND=mxnet python gat_batch.py --dataset cora --gpu 0 --num-heads 8
+DGLBACKEND=mxnet python3 gat_batch.py --dataset cora --gpu 0 --num-heads 8
```
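The commands above set `DGLBACKEND` per invocation on the shell. The same effect can be had from inside a script by setting the environment variable before `dgl` is imported, since the backend is chosen at import time. A minimal sketch (illustrative only: it shows just the environment handling, no `dgl` import):

```python
import os

# Must happen before `import dgl` for the backend choice to take effect.
os.environ["DGLBACKEND"] = "mxnet"

print(os.environ["DGLBACKEND"])  # mxnet
```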
......@@ -22,15 +22,15 @@ Example code was tested with rdflib 4.2.2 and pandas 0.23.4
### Entity Classification
AIFB: accuracy 97.22% (DGL), 95.83% (paper)
```
-DGLBACKEND=mxnet python entity_classify.py -d aifb --testing --gpu 0
+DGLBACKEND=mxnet python3 entity_classify.py -d aifb --testing --gpu 0
```
MUTAG: accuracy 76.47% (DGL), 73.23% (paper)
```
-DGLBACKEND=mxnet python entity_classify.py -d mutag --l2norm 5e-4 --n-bases 40 --testing --gpu 0
+DGLBACKEND=mxnet python3 entity_classify.py -d mutag --l2norm 5e-4 --n-bases 40 --testing --gpu 0
```
BGS: accuracy 79.31% (DGL, n-bases=20, OOM when >20), 83.10% (paper)
```
-DGLBACKEND=mxnet python entity_classify.py -d bgs --l2norm 5e-4 --n-bases 20 --testing --gpu 0 --relabel
+DGLBACKEND=mxnet python3 entity_classify.py -d bgs --l2norm 5e-4 --n-bases 20 --testing --gpu 0 --relabel
```
......@@ -15,44 +15,44 @@ pip install mxnet --pre
### Neighbor Sampling & Skip Connection
cora: test accuracy ~83% with `--num-neighbors 2`, ~84% by training on the full graph
```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_ns --dataset cora --self-loop --num-neighbors 2 --batch-size 1000 --test-batch-size 5000
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_ns --dataset cora --self-loop --num-neighbors 2 --batch-size 1000 --test-batch-size 5000
```
citeseer: test accuracy ~69% with `--num-neighbors 2`, ~70% by training on the full graph
```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_ns --dataset citeseer --self-loop --num-neighbors 2 --batch-size 1000 --test-batch-size 5000
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_ns --dataset citeseer --self-loop --num-neighbors 2 --batch-size 1000 --test-batch-size 5000
```
pubmed: test accuracy ~78% with `--num-neighbors 3`, ~77% by training on the full graph
```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_ns --dataset pubmed --self-loop --num-neighbors 3 --batch-size 1000 --test-batch-size 5000
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_ns --dataset pubmed --self-loop --num-neighbors 3 --batch-size 1000 --test-batch-size 5000
```
reddit: test accuracy ~91% with `--num-neighbors 3` and `--batch-size 1000`, ~93% by training on the full graph
```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_ns --dataset reddit-self-loop --num-neighbors 3 --batch-size 1000 --test-batch-size 5000 --n-hidden 64
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_ns --dataset reddit-self-loop --num-neighbors 3 --batch-size 1000 --test-batch-size 5000 --n-hidden 64
```
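`--num-neighbors` caps how many neighbors are aggregated per node in each layer, which is what keeps minibatch training tractable on large graphs. A toy sketch of that sampling step in plain Python (`sample_neighbors` is a hypothetical helper for illustration, not DGL's actual sampler):

```python
import random

def sample_neighbors(adj, seeds, num_neighbors, rng):
    """Pick at most `num_neighbors` neighbors for each seed node.

    `adj` maps a node id to its list of neighbor ids.
    """
    sampled = {}
    for v in seeds:
        nbrs = adj[v]
        if len(nbrs) <= num_neighbors:
            sampled[v] = list(nbrs)
        else:
            sampled[v] = rng.sample(nbrs, num_neighbors)
    return sampled

adj = {0: [1, 2, 3], 1: [0], 2: [0, 3], 3: [0, 2]}
out = sample_neighbors(adj, [0, 2], 2, random.Random(0))
```

Every node in the minibatch then aggregates messages only from its sampled neighbors, so the per-batch cost is bounded regardless of true node degrees.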
### Control Variate & Skip Connection
cora: test accuracy ~84% with `--num-neighbors 1`, ~84% by training on the full graph
```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_cv --dataset cora --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_cv --dataset cora --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000
```
citeseer: test accuracy ~69% with `--num-neighbors 1`, ~70% by training on the full graph
```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_cv --dataset citeseer --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_cv --dataset citeseer --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000
```
pubmed: test accuracy ~79% with `--num-neighbors 1`, ~77% by training on the full graph
```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_cv --dataset pubmed --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_cv --dataset pubmed --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000
```
reddit: test accuracy ~93% with `--num-neighbors 1` and `--batch-size 1000`, ~93% by training on the full graph
```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_cv --dataset reddit-self-loop --num-neighbors 1 --batch-size 10000 --test-batch-size 5000 --n-hidden 64
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_cv --dataset reddit-self-loop --num-neighbors 1 --batch-size 10000 --test-batch-size 5000 --n-hidden 64
```
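The control-variate estimator ([Chen et al., 2017](https://arxiv.org/abs/1710.10568)) is why `--num-neighbors 1` suffices here: each node keeps a stale "historical" activation for all neighbors and corrects it with a small fresh sample. A scalar-feature sketch of the mean-aggregation version (illustrative, not the code in `train.py`):

```python
def cv_aggregate(fresh, history, sampled_ids, all_ids):
    """Approximate the mean of `fresh` over all neighbors.

    Combines the stale history of every neighbor with a sampled,
    rescaled correction term (fresh - history) over `sampled_ids`.
    """
    n, s = len(all_ids), len(sampled_ids)
    correction = sum(fresh[i] - history[i] for i in sampled_ids) * (n / max(s, 1))
    baseline = sum(history[i] for i in all_ids)
    return (correction + baseline) / n
```

When the history is up to date the correction vanishes and the estimator returns the exact historical mean; the variance of the estimator shrinks as history approaches the fresh activations, which is what lets the sample size stay at 1.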
### Control Variate & GraphSAGE-mean
......@@ -61,7 +61,7 @@ Following [Control Variate](https://arxiv.org/abs/1710.10568), we use the mean p
reddit: test accuracy 96.1% with `--num-neighbors 1` and `--batch-size 1000`, ~96.2% in [Control Variate](https://arxiv.org/abs/1710.10568) with `--num-neighbors 2` and `--batch-size 1000`
```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model graphsage_cv --batch-size 1000 --test-batch-size 5000 --n-epochs 50 --dataset reddit --num-neighbors 1 --n-hidden 128 --dropout 0.2 --weight-decay 0
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model graphsage_cv --batch-size 1000 --test-batch-size 5000 --n-epochs 50 --dataset reddit --num-neighbors 1 --n-hidden 128 --dropout 0.2 --weight-decay 0
```
### Run multi-processing training
......
......@@ -21,7 +21,7 @@ The script will download the [SST dataset] (http://nlp.stanford.edu/sentiment/in
## Usage
```
-python train.py --gpu 0
+python3 train.py --gpu 0
```
## Speed Test
......
......@@ -22,7 +22,7 @@ Results
Run with the following (available datasets: "cora", "citeseer", "pubmed")
```bash
-python train.py --dataset cora --gpu 0
+python3 train.py --dataset cora --gpu 0
```
* cora: 0.8370 (paper: 0.850)
......
......@@ -17,7 +17,7 @@ Training & Evaluation
----------------------
```bash
# Run with default config
-python main.py
+python3 main.py
# Run with train and test batch size 128, and for 50 epochs
-python main.py --batch-size 128 --test-batch-size 128 --epochs 50
+python3 main.py --batch-size 128 --test-batch-size 128 --epochs 50
```
......@@ -20,15 +20,15 @@ How to run
Run with the following:
```bash
-python train.py --dataset=cora --gpu=0 --self-loop
+python3 train.py --dataset=cora --gpu=0 --self-loop
```
```bash
-python train.py --dataset=citeseer --gpu=0
+python3 train.py --dataset=citeseer --gpu=0
```
```bash
-python train.py --dataset=pubmed --gpu=0
+python3 train.py --dataset=pubmed --gpu=0
```
Results
......
......@@ -10,8 +10,8 @@ Yujia Li, Oriol Vinyals, Chris Dyer, Razvan Pascanu, Peter Battaglia.
## Usage
-- Train with batch size 1: `python main.py`
-- Train with batch size larger than 1: `python main_batch.py`.
+- Train with batch size 1: `python3 main.py`
+- Train with batch size larger than 1: `python3 main_batch.py`.
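Batching several graphs into one minibatch (what `main_batch.py` exploits) works by laying the graphs side by side: node ids are offset per graph, so the merged edge set is block-diagonal and message passing never crosses graph boundaries. A minimal sketch of that merge (hypothetical `batch_graphs` helper, not the repo's code):

```python
def batch_graphs(graphs):
    """Merge (num_nodes, edge_list) pairs into one graph with offset ids."""
    edges, offset = [], 0
    for n_nodes, g_edges in graphs:
        edges.extend((u + offset, v + offset) for u, v in g_edges)
        offset += n_nodes
    return offset, edges
```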
## Performance
......
......@@ -23,19 +23,19 @@ How to run
Run with the following:
```bash
-python train.py --dataset=cora --gpu=0
+python3 train.py --dataset=cora --gpu=0
```
```bash
-python train.py --dataset=citeseer --gpu=0
+python3 train.py --dataset=citeseer --gpu=0
```
```bash
-python train.py --dataset=pubmed --gpu=0 --num-out-heads=8 --weight-decay=0.001
+python3 train.py --dataset=pubmed --gpu=0 --num-out-heads=8 --weight-decay=0.001
```
```bash
-python train_ppi.py --gpu=0
+python3 train_ppi.py --gpu=0
```
Results
......
......@@ -28,7 +28,7 @@ Results
Run with the following (available datasets: "cora", "citeseer", "pubmed")
```bash
-python train.py --dataset cora --gpu 0 --self-loop
+python3 train.py --dataset cora --gpu 0 --self-loop
```
* cora: ~0.810 (0.79-0.83) (paper: 0.815)
......
......@@ -20,12 +20,12 @@ How to run
An experiment on GIN with default settings can be run with
```bash
-python main.py
+python3 main.py
```
An experiment on GIN with customized settings can be run with
```bash
-python main.py [--device 0 | --disable-cuda] --dataset COLLAB \
+python3 main.py [--device 0 | --disable-cuda] --dataset COLLAB \
--graph_pooling_type max --neighbor_pooling_type sum
```
......@@ -35,7 +35,7 @@ Results
Run the following with the double-SUM pooling variant (tested datasets: "MUTAG" (default), "COLLAB", "IMDBBINARY", "IMDBMULTI"):
```bash
-python train.py --dataset MUTAG --device 0 \
+python3 train.py --dataset MUTAG --device 0 \
--graph_pooling_type sum --neighbor_pooling_type sum
```
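`--graph_pooling_type` picks how the per-node representations are collapsed into a single graph representation for classification. For scalar node features (a deliberate simplification: real GIN pools feature vectors), the `sum` and `max` options amount to:

```python
def graph_pool(node_feats, pooling_type):
    """Collapse a list of per-node scalars into one graph-level scalar."""
    if pooling_type == "sum":
        return sum(node_feats)
    if pooling_type == "max":
        return max(node_feats)
    raise ValueError(f"unknown pooling: {pooling_type}")
```

Sum pooling preserves information about graph size and node multiplicity, which is part of GIN's argument for its discriminative power; max pooling discards it.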
......
......@@ -19,7 +19,7 @@ Results
Run with the following (available datasets: "cora", "citeseer", "pubmed")
```bash
-python graphsage.py --dataset cora --gpu 0
+python3 graphsage.py --dataset cora --gpu 0
```
* cora: ~0.8470
......
......@@ -22,12 +22,12 @@ How to run
An experiment on the Stochastic Block Model with default settings can be run with
```bash
-python train.py
+python3 train.py
```
An experiment on the Stochastic Block Model with customized settings can be run with
```bash
-python train.py --batch-size BATCH_SIZE --gpu GPU --n-communities N_COMMUNITIES \
+python3 train.py --batch-size BATCH_SIZE --gpu GPU --n-communities N_COMMUNITIES \
--n-features N_FEATURES --n-graphs N_GRAPH --n-iterations N_ITERATIONS \
--n-layers N_LAYER --n-nodes N_NODE --model-path MODEL_PATH --radius RADIUS
```
......@@ -16,32 +16,32 @@ pip install torch requests
### Neighbor Sampling & Skip Connection
cora: test accuracy ~83% with --num-neighbors 2, ~84% by training on the full graph
```
-python gcn_ns_sc.py --dataset cora --self-loop --num-neighbors 2 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
+python3 gcn_ns_sc.py --dataset cora --self-loop --num-neighbors 2 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
```
citeseer: test accuracy ~69% with --num-neighbors 2, ~70% by training on the full graph
```
-python gcn_ns_sc.py --dataset citeseer --self-loop --num-neighbors 2 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
+python3 gcn_ns_sc.py --dataset citeseer --self-loop --num-neighbors 2 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
```
pubmed: test accuracy ~76% with --num-neighbors 3, ~77% by training on the full graph
```
-python gcn_ns_sc.py --dataset pubmed --self-loop --num-neighbors 3 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
+python3 gcn_ns_sc.py --dataset pubmed --self-loop --num-neighbors 3 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
```
### Control Variate & Skip Connection
cora: test accuracy ~84% with --num-neighbors 1, ~84% by training on the full graph
```
-python gcn_cv_sc.py --dataset cora --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
+python3 gcn_cv_sc.py --dataset cora --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
```
citeseer: test accuracy ~69% with --num-neighbors 1, ~70% by training on the full graph
```
-python gcn_cv_sc.py --dataset citeseer --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
+python3 gcn_cv_sc.py --dataset citeseer --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
```
pubmed: test accuracy ~77% with --num-neighbors 1, ~77% by training on the full graph
```
-python gcn_cv_sc.py --dataset pubmed --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
+python3 gcn_cv_sc.py --dataset pubmed --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
```
......@@ -22,9 +22,9 @@ Results
Run with the following (available datasets: "cora", "citeseer", "pubmed")
```bash
-python sgc.py --dataset cora --gpu 0
-python sgc.py --dataset citeseer --weight-decay 5e-5 --n-epochs 150 --bias --gpu 0
-python sgc.py --dataset pubmed --weight-decay 5e-5 --bias --gpu 0
+python3 sgc.py --dataset cora --gpu 0
+python3 sgc.py --dataset citeseer --weight-decay 5e-5 --n-epochs 150 --bias --gpu 0
+python3 sgc.py --dataset pubmed --weight-decay 5e-5 --bias --gpu 0
```
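SGC is fast because it removes the nonlinearities of a GCN: the features are smoothed by K rounds of neighbor averaging once, up front, and only a single linear layer is then trained. A toy scalar-feature version of that precomputation (illustrative; `sgc_precompute` is a hypothetical helper, and the real model uses a symmetrically normalized adjacency rather than a plain mean):

```python
def sgc_precompute(adj, feats, k):
    """K rounds of neighbor averaging with self-loops, done once up front."""
    for _ in range(k):
        feats = {
            v: sum(feats[u] for u in nbrs + [v]) / (len(nbrs) + 1)
            for v, nbrs in adj.items()
        }
    return feats
```

Because the propagation happens once rather than per epoch, each training step touches only the precomputed features, which is what the V100 timings below measure.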
On NVIDIA V100
......
......@@ -15,13 +15,13 @@ The folder contains training module and inferencing module (beam decoder) for Tr
- For training:
```
-python translation_train.py [--gpus id1,id2,...] [--N #layers] [--dataset DATASET] [--batch BATCHSIZE] [--universal]
+python3 translation_train.py [--gpus id1,id2,...] [--N #layers] [--dataset DATASET] [--batch BATCHSIZE] [--universal]
```
- For evaluating the BLEU score on the test set (enable `--print` to see the translated text):
```
-python translation_test.py [--gpu id] [--N #layers] [--dataset DATASET] [--batch BATCHSIZE] [--checkpoint CHECKPOINT] [--print] [--universal]
+python3 translation_test.py [--gpu id] [--N #layers] [--dataset DATASET] [--batch BATCHSIZE] [--checkpoint CHECKPOINT] [--print] [--universal]
```
Available datasets: `copy`, `sort`, `wmt14`, `multi30k`(default).
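The beam decoder used at test time keeps only the best-scoring partial translations at each step instead of expanding every continuation. A toy sketch of that search (illustrative: `next_scores(seq)` maps a prefix to `{token: log_prob}` and stands in for the real model's output distribution):

```python
import heapq

def beam_search(next_scores, start, steps, beam_width):
    """Expand each kept prefix by every candidate token, keep the top beams."""
    beams = [(0.0, [start])]  # (cumulative log-prob, token sequence)
    for _ in range(steps):
        candidates = [
            (logp + lp, seq + [tok])
            for logp, seq in beams
            for tok, lp in next_scores(seq).items()
        ]
        beams = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
    return beams
```

With `beam_width=1` this degenerates to greedy decoding; wider beams trade decoding time for a better chance of finding high-probability translations.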
......
......@@ -24,7 +24,7 @@ pip install torch requests nltk
## Usage
```
-python train.py --gpu 0
+python3 train.py --gpu 0
```
## Speed
......