@@ -8,24 +8,47 @@ The folder contains three different implementations using DGL.
...
Results
-------
These results are based on single-run training that minimizes the cross-entropy loss on the first 20 labeled examples of each class. We can see clear improvements of graph convolutional networks (GCNs) over multi-layer perceptron (MLP) baselines. There are also some slight modifications from the original paper:
* We used more layers (up to 10) to demonstrate monotonic improvements as information from more distant neighbors is used. Adding GCN layers improves accuracy but also increases computational cost; the original paper recommends n-layers=2 to balance speed and accuracy.
* We used concatenation of hidden units to account for multi-hop skip-connections. The original implementation used simple addition (the original paper omitted this detail). We feel concatenation is superior because all neighboring information is preserved without additional modeling assumptions.
* After the concatenation, we used a recursive model in which the (k+1)-th layer, which stores information from neighbors up to distance k+1, depends on the concatenation of the outputs of layers 1 through k. The activation, however, is applied only to the newly computed block of the concatenation.
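The recursive concatenation scheme in the last two bullets can be sketched as follows. This is a minimal illustration, not the actual implementation in this folder: it uses NumPy with a dense row-normalized adjacency matrix instead of DGL message passing, and the function and variable names (`concat_gcn`, `hidden`, etc.) are hypothetical. The key points it demonstrates are that each layer consumes the concatenation of all previous layers' outputs, and that the nonlinearity is applied only to the newly computed block before it is appended.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def concat_gcn(adj, x, hidden=8, n_layers=3):
    """Sketch of a GCN with concatenation-based skip-connections.

    adj : (N, N) row-normalized adjacency matrix
    x   : (N, F) input node features

    Layer k+1 takes the concatenation of the input and the outputs of
    layers 1..k; the activation is applied only to the new block, so
    earlier blocks pass through unchanged.
    """
    h = x
    for _ in range(n_layers):
        # Random weights stand in for learned parameters in this sketch.
        w = rng.standard_normal((h.shape[1], hidden)) * 0.1
        new = relu(adj @ h @ w)               # activation on the new block only
        h = np.concatenate([h, new], axis=1)  # append; older blocks untouched
    return h

# Toy usage: a 4-node cycle graph with 5-dimensional node features.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=np.float64)
adj = adj / adj.sum(axis=1, keepdims=True)  # simple row normalization
x = rng.standard_normal((4, 5))
out = concat_gcn(adj, x, hidden=8, n_layers=3)
print(out.shape)  # (4, 5 + 3 * 8) = (4, 29)
```

Note how the output width grows by `hidden` per layer, which is why the number of parameters in layer k+1 grows with k; the simple-addition variant used in the original implementation keeps the width constant instead.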