OpenDAS / dgl · Commit dedfd908
Authored Jan 27, 2019 by Mufei Li; committed by Minjie Wang, Jan 26, 2019

[Fix] Fix link in batched graph blog (#367)

Parent: 1989f8f4
Showing 1 changed file with 12 additions and 12 deletions

tutorials/basics/4_batch.py (+12, -12)
@@ -12,12 +12,12 @@ Graph classification is an important problem
 with applications across many fields -- bioinformatics, chemoinformatics, social
 network analysis, urban computing and cyber-security. Applying graph neural
 networks to this problem has been a popular approach recently (
-`Ying et al., 2018 <https://arxiv.org/pdf/1806.08804.pdf>`_,
+`Ying et al., 2018 <https://arxiv.org/abs/1806.08804>`_,
-`Cangea et al., 2018 <https://arxiv.org/pdf/1811.01287.pdf>`_,
+`Cangea et al., 2018 <https://arxiv.org/abs/1811.01287>`_,
-`Knyazev et al., 2018 <https://arxiv.org/pdf/1811.09595.pdf>`_,
+`Knyazev et al., 2018 <https://arxiv.org/abs/1811.09595>`_,
-`Bianchi et al., 2019 <https://arxiv.org/pdf/1901.01343.pdf>`_,
+`Bianchi et al., 2019 <https://arxiv.org/abs/1901.01343>`_,
-`Liao et al., 2019 <https://arxiv.org/pdf/1901.01484.pdf>`_,
+`Liao et al., 2019 <https://arxiv.org/abs/1901.01484>`_,
-`Gao et al., 2019 <https://openreview.net/pdf?id=HJePRoAct7>`_).
+`Gao et al., 2019 <https://openreview.net/forum?id=HJePRoAct7>`_).
 This tutorial demonstrates:

 * batching multiple graphs of variable size and shape with DGL
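The batching idea mentioned in the context above can be illustrated in plain Python. This is a toy sketch of the concept behind DGL's graph batching, not DGL's actual implementation: several graphs are merged into one large graph whose adjacency is block-diagonal, by shifting each graph's node IDs by the number of nodes already batched. The helper name `batch_edge_lists` is hypothetical.

```python
# Toy sketch (NOT DGL's implementation): batch variable-size graphs by
# merging their edge lists into one block-diagonal graph.

def batch_edge_lists(graphs):
    """graphs: list of (num_nodes, edges) pairs, edges as (src, dst) tuples."""
    total_nodes, batched_edges, offsets = 0, [], []
    for num_nodes, edges in graphs:
        offsets.append(total_nodes)  # where this graph's nodes start
        # shift node IDs so graphs occupy disjoint ID ranges
        batched_edges.extend((s + total_nodes, d + total_nodes) for s, d in edges)
        total_nodes += num_nodes
    return total_nodes, batched_edges, offsets

# A triangle (3 nodes) batched with a single edge (2 nodes):
n, e, off = batch_edge_lists([(3, [(0, 1), (1, 2), (2, 0)]),
                              (2, [(0, 1)])])
# n == 5, off == [0, 3], and the second graph's edge becomes (3, 4)
```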
@@ -162,7 +162,7 @@ class GCN(nn.Module):
 #
 # In DGL, :func:`dgl.mean_nodes` handles this task for a batch of
 # graphs with variable size. We then feed our graph representations into a
-# classifier with one linear layer followed by :math:`\text{sigmoid}`.
+# classifier with one linear layer to obtain pre-softmax logits.

 import torch.nn.functional as F
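The readout step named in the context above, `dgl.mean_nodes`, averages the node features of each graph in a batch to get one vector per graph. Here is a dependency-free sketch of that idea (not DGL's code); the function name mirrors the DGL API but the signature is simplified for illustration.

```python
# Toy sketch of a mean-nodes readout over a batched graph (NOT DGL's code):
# average each graph's node feature vectors, given per-graph node counts.

def mean_nodes(features, batch_num_nodes):
    """features: per-node feature vectors for the whole batch, in order;
    batch_num_nodes: node count of each graph in the batch."""
    readouts, start = [], 0
    for count in batch_num_nodes:
        chunk = features[start:start + count]   # this graph's node features
        dim = len(chunk[0])
        readouts.append([sum(v[i] for v in chunk) / count for i in range(dim)])
        start += count
    return readouts

# Two graphs with 2 and 1 nodes, 2-d node features:
out = mean_nodes([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], [2, 1])
# out == [[2.0, 3.0], [5.0, 6.0]]
```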
@@ -236,8 +236,8 @@ plt.show()
 # of the tutorial, we restrict our running time and you are likely to get a higher
 # accuracy (:math:`80` % ~ :math:`90` %) than the ones printed below.

-# Convert a list of tuples to two lists
 model.eval()
+# Convert a list of tuples to two lists
 test_X, test_Y = map(list, zip(*testset))
 test_bg = dgl.batch(test_X)
 test_Y = torch.tensor(test_Y).float().view(-1, 1)
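The `map(list, zip(*testset))` idiom in the hunk above "unzips" a list of (graph, label) pairs into a list of graphs and a list of labels. A minimal stdlib-only example, with string stand-ins for the real graphs:

```python
# Unzip a list of (sample, label) pairs into two parallel lists.
# 'g1'/'g2'/'g3' are stand-ins for the graphs in the real testset.
testset = [('g1', 0), ('g2', 1), ('g3', 0)]
test_X, test_Y = map(list, zip(*testset))
# test_X == ['g1', 'g2', 'g3'] and test_Y == [0, 1, 0]
```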
@@ -255,7 +255,7 @@ print('Accuracy of argmax predictions on the test set: {:4f}%'.format(
 #
 # .. image:: https://s3.us-east-2.amazonaws.com/dgl.ai/tutorial/batch/test_eval4.gif
 #
-# To understand how the node/graph features change over layers with a trained model,
+# To understand the node/graph representations a trained model learnt,
 # we use `t-SNE, <https://lvdmaaten.github.io/tsne/>`_ for dimensionality reduction
 # and visualization.
 #
@@ -265,9 +265,9 @@ print('Accuracy of argmax predictions on the test set: {:4f}%'.format(
 # .. image:: https://s3.us-east-2.amazonaws.com/dgl.ai/tutorial/batch/tsne_graph2.png
 #    :align: center
 #
-# The two small figures on the top separately visualize node features after :math:`1`,
+# The two small figures on the top separately visualize node representations after :math:`1`,
 # :math:`2` layers of graph convolution and the figure on the bottom visualizes
-# the pre-softmax logits for graphs.
+# the pre-softmax logits for graphs as graph representations.
 #
 # While the visualization does suggest some clustering effects of the node features,
 # it is expected not to be a perfect result as node degrees are deterministic for
@@ -279,7 +279,7 @@ print('Accuracy of argmax predictions on the test set: {:4f}%'.format(
 # waiting for folks to bring more exciting discoveries! It is not easy as it
 # requires mapping different graphs to different embeddings while preserving
 # their structural similarity in the embedding space. To learn more about it,
-# `"How Powerful Are Graph Neural Networks?" <https://arxiv.org/pdf/1810.00826.pdf>`_
+# `"How Powerful Are Graph Neural Networks?" <https://arxiv.org/abs/1810.00826>`_
 # in ICLR 2019 might be a good starting point.
 #
 # With regards to more examples on batched graph processing, see