"git@developer.sourcefind.cn:OpenDAS/dgl.git" did not exist on "73b9c6f18c9737f96e91be6ae8a2e6d794be9a69"
Commit 02eb463a authored by yifeim, committed by Da Zheng

report accuracy and fix training mask (#181)

parent 2c2b7478
@@ -6,6 +6,26 @@ Author's code repo: [https://github.com/tkipf/gcn](https://github.com/tkipf/gcn)
The folder contains three different implementations using DGL.
Results
-------
These results come from a single training run that minimizes the cross-entropy loss on the first 20 examples of each class. To keep the demo simple, we did not use the normalized graph or the repeated experiments suggested in the original paper, which may lead to slightly different results; nevertheless, the accuracies are in the same range.
```
# Final accuracy 72.90%
DGLBACKEND=mxnet python3 examples/mxnet/gcn/gcn_batch.py --dataset "citeseer" --n-epochs 200 --gpu 1
```
```
# Final accuracy 83.11%
DGLBACKEND=mxnet python3 examples/mxnet/gcn/gcn_batch.py --dataset "cora" --n-epochs 200 --gpu 1
```
```
# Final accuracy 82.99%
DGLBACKEND=mxnet python3 examples/mxnet/gcn/gcn_batch.py --dataset "pubmed" --n-epochs 200 --gpu 1
```
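The commit passes the training mask to `gluon.loss.SoftmaxCELoss` as a per-example weight, so only the labeled training nodes contribute to the loss. Below is a minimal standalone sketch of that idea; the tensor shapes, values, and variable names are illustrative and not taken from the example itself.
```
import mxnet as mx
from mxnet import gluon

# Toy data: 8 nodes, 3 classes (values are illustrative).
logits = mx.nd.random.normal(shape=(8, 3))
labels = mx.nd.array([0, 1, 2, 0, 1, 2, 0, 1])
# 1 for nodes whose labels may be used during training, 0 for the rest.
train_mask = mx.nd.array([1, 1, 1, 0, 0, 0, 0, 0])

loss_fcn = gluon.loss.SoftmaxCELoss()
# The mask is passed as the per-example weight (third argument), so
# masked-out nodes contribute zero to the cross-entropy loss.
loss = loss_fcn(logits, labels, train_mask.expand_dims(1))
print(loss.mean().asscalar())
```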
Naive GCN (gcn.py)
-------
The model is defined at the finest granularity (i.e., on *one* edge and *one* node).
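As a loose illustration of what "finest granularity" means, the sketch below computes one message per edge and one reduction per node with plain Python over an explicit edge list. It is not part of the example code, and the actual DGL message/reduce callbacks and their signatures differ between releases.
```
import mxnet as mx

def gcn_msg(src_h):
    # message on *one* edge: forward the source node's representation
    return src_h

def gcn_reduce(msgs):
    # reduction on *one* node: sum its incoming messages
    return mx.nd.add_n(*msgs)

# Toy graph: (src, dst) edge list and one feature vector per node.
edges = [(0, 1), (1, 2), (2, 0), (0, 2)]
h = {i: mx.nd.ones((4,)) * (i + 1) for i in range(3)}

mailbox = {i: [] for i in h}
for src, dst in edges:                            # one message per edge
    mailbox[dst].append(gcn_msg(h[src]))
h_new = {i: gcn_reduce(mailbox[i]) for i in h}    # one reduce per node
print({i: v.asnumpy() for i, v in h_new.items()})
```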
......
@@ -90,6 +90,7 @@ def main(args):
                'relu',
                args.dropout)
    model.initialize(ctx=ctx)
    loss_fcn = gluon.loss.SoftmaxCELoss()
    # use optimizer
    trainer = gluon.Trainer(model.collect_params(), 'adam', {'learning_rate': args.lr})
@@ -101,8 +102,8 @@ def main(args):
        t0 = time.time()
        # forward
        with mx.autograd.record():
            pred = model(features)                # was: logits = model(features)
            loss = loss_fcn(pred, labels, mask)   # was: loss = mx.nd.softmax_cross_entropy(logits, labels)
        #optimizer.zero_grad()
        loss.backward()
@@ -113,6 +114,12 @@ def main(args):
print("Epoch {:05d} | Loss {:.4f} | Time(s) {:.4f} | ETputs(KTEPS) {:.2f}".format( print("Epoch {:05d} | Loss {:.4f} | Time(s) {:.4f} | ETputs(KTEPS) {:.2f}".format(
epoch, loss.asnumpy()[0], np.mean(dur), n_edges / np.mean(dur) / 1000)) epoch, loss.asnumpy()[0], np.mean(dur), n_edges / np.mean(dur) / 1000))
# test set accuracy
pred = model(features)
accuracy = (pred*100).softmax().pick(labels).mean()
print("Final accuracy {:.2%}".format(accuracy.mean().asscalar()))
if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='GCN')
    register_data_args(parser)
......
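For context on the new "Final accuracy" line: `(pred*100).softmax().pick(labels)` sharpens the softmax so it is nearly one-hot at the argmax, and picking the probability of the true label then yields roughly 1 for a correct prediction and 0 for an incorrect one. The standalone sketch below compares this against plain argmax accuracy on toy values; it is illustrative only and not part of the commit.
```
import mxnet as mx

# Toy predictions for 5 nodes over 3 classes (illustrative values).
pred = mx.nd.array([[2.0, 0.1, 0.3],
                    [0.2, 1.5, 0.1],
                    [0.3, 0.2, 2.2],
                    [1.0, 0.9, 0.1],
                    [0.1, 0.2, 0.3]])
labels = mx.nd.array([0, 1, 2, 1, 2])

# Sharpened-softmax "soft" accuracy, as in the commit.
soft_acc = (pred * 100).softmax().pick(labels).mean()
# Plain argmax accuracy for comparison.
hard_acc = (pred.argmax(axis=1) == labels).mean()
print(soft_acc.asscalar(), hard_acc.asscalar())
```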