"src/array/cuda/sddmm.hip" did not exist on "619d735df5dc2a62eca5a00e11e4290407169cb1"
Unverified commit 9779c026, authored by Mufei Li and committed by GitHub

Fix tutorial (#1587)

parent 2b9d06b6
@@ -126,6 +126,8 @@ def evaluate(model, g, features, labels, mask):
 import time
 import numpy as np
 g, features, labels, train_mask, test_mask = load_cora_data()
+# Add edges between each node and itself to preserve old node representations
+g.add_edges(g.nodes(), g.nodes())
 optimizer = th.optim.Adam(net.parameters(), lr=1e-2)
 dur = []
 for epoch in range(50):
@@ -159,14 +161,14 @@ for epoch in range(50):
 #
 # Here, :math:`H^{(l)}` denotes the :math:`l^{th}` layer in the network,
 # :math:`\sigma` is the non-linearity, and :math:`W` is the weight matrix for
-# this layer. :math:`D` and :math:`A`, as commonly seen, represent degree
-# matrix and adjacency matrix, respectively. The ~ is a renormalization trick
-# in which we add a self-connection to each node of the graph, and build the
-# corresponding degree and adjacency matrix. The shape of the input
+# this layer. :math:`\tilde{D}` and :math:`\tilde{A}` are, respectively, the
+# degree and adjacency matrices of the graph after adding an edge between each
+# node and itself, a renormalization trick that preserves a node's old
+# representation in graph convolutions. The shape of the input
 # :math:`H^{(0)}` is :math:`N \times D`, where :math:`N` is the number of nodes
 # and :math:`D` is the number of input features. We can chain up multiple
 # layers as such to produce a node-level representation output with shape
-# :math`N \times F`, where :math:`F` is the dimension of the output node
+# :math:`N \times F`, where :math:`F` is the dimension of the output node
 # feature vector.
 #
 # The equation can be efficiently implemented using sparse matrix
@@ -174,3 +176,8 @@ for epoch in range(50):
 # `pygcn <https://github.com/tkipf/pygcn>`_ code). The above DGL implementation
 # in fact has already used this trick due to the use of builtin functions. To
 # understand what is under the hood, please read our tutorial on :doc:`PageRank <../../basics/3_pagerank>`.
+#
+# Note that the tutorial code implements a simplified version of GCN where we
+# replace :math:`\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}` with
+# :math:`\tilde{A}`. For a full implementation, see our example
+# `here <https://github.com/dmlc/dgl/tree/master/examples/pytorch/gcn>`_.
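For reference, here is a minimal sketch of a layer with the full symmetric normalization D~^{-1/2} A~ D~^{-1/2} that the added note contrasts with the simplified A~ variant. It uses DGL's message-passing builtins; the helper name `gcn_layer` and the explicit `weight` argument are illustrative, not part of the tutorial or this commit:

```python
# A sketch only: full symmetric normalization for a GCN layer, assuming
# self-loops were already added via g.add_edges(g.nodes(), g.nodes()).
import torch as th
import dgl.function as fn

def gcn_layer(g, h, weight):  # hypothetical helper, not from the tutorial
    norm = th.pow(g.in_degrees().float(), -0.5).unsqueeze(1)  # D~^{-1/2}
    g.ndata['h'] = h * norm                                   # D~^{-1/2} H
    # Sum messages from neighbors: computes A~ (D~^{-1/2} H)
    g.update_all(fn.copy_src('h', 'm'), fn.sum('m', 'h'))
    h = g.ndata.pop('h') * norm   # D~^{-1/2} A~ D~^{-1/2} H
    return th.matmul(h, weight)   # apply the layer weight W^{(l)}
```

Dropping the two multiplications by `norm` recovers the simplified A~ aggregation that the tutorial code implements.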