OpenDAS / dgl · Commits

Commit 9e04a52a (unverified)
Authored Feb 19, 2021 by Quan (Andy) Gan; committed by GitHub, Feb 19, 2021
[Doc] Reorganize tutorial (#2678)
Parent: dda103d9

Showing 15 changed files with 42 additions and 43 deletions (+42 −43)
docs/source/conf.py                                   +6  −2
docs/source/index.rst                                 +7  −29
new-tutorial/blitz/1_introduction.py                  +2  −2
new-tutorial/blitz/2_dglgraph.py                      +0  −0
new-tutorial/blitz/3_message_passing.py               +0  −0
new-tutorial/blitz/4_link_predict.py                  +0  −0
new-tutorial/blitz/5_graph_classification.py          +0  −0
new-tutorial/blitz/6_load_data.py                     +0  −0
new-tutorial/large/L0_neighbor_sampling_overview.py   +2  −2
new-tutorial/large/L1_large_node_classification.py    +1  −1
new-tutorial/large/L2_large_link_prediction.py        +4  −4
new-tutorial/large/L4_message_passing.py              +3  −3
tutorials/models/1_gnn/1_gcn.py                       +6  −0
tutorials/models/1_gnn/4_rgcn.py                      +5  −0
tutorials/models/1_gnn/9_gat.py                       +6  −0
docs/source/conf.py

...
@@ -197,8 +197,12 @@ from sphinx_gallery.sorting import FileNameSortKey
 examples_dirs = ['../../tutorials/basics',
                  '../../tutorials/models',
-                 '../../new-tutorial']  # path to find sources
-gallery_dirs = ['tutorials/basics',
-                'tutorials/models',
-                'new-tutorial']  # path to generate docs
+                 '../../new-tutorial/blitz',
+                 '../../new-tutorial/large']  # path to find sources
+gallery_dirs = ['tutorials/basics',
+                'tutorials/models',
+                'new-tutorial/blitz',
+                'new-tutorial/large']  # path to generate docs
 reference_url = {
     'dgl': None,
     'numpy': 'http://docs.scipy.org/doc/numpy/',
...
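The change above keeps `examples_dirs` and `gallery_dirs` as parallel lists: sphinx-gallery pairs them index by index, rendering each source directory into the gallery directory at the same position. A minimal sanity check of that pairing, using the directory names from the diff:

```python
# Parallel lists as configured in docs/source/conf.py after this commit.
examples_dirs = ['../../tutorials/basics',
                 '../../tutorials/models',
                 '../../new-tutorial/blitz',
                 '../../new-tutorial/large']   # where the .py tutorial sources live
gallery_dirs = ['tutorials/basics',
                'tutorials/models',
                'new-tutorial/blitz',
                'new-tutorial/large']          # where the rendered pages are written

# sphinx-gallery zips the two lists together, so they must have the same
# length; a mismatch is a common cause of gallery build failures.
assert len(examples_dirs) == len(gallery_dirs)
for src, dst in zip(examples_dirs, gallery_dirs):
    print(f'{src} -> {dst}')
```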
docs/source/index.rst

...
@@ -84,27 +84,20 @@ Getting Started
 .. toctree::
    :maxdepth: 2
-   :caption: Basic Tutorials
+   :caption: Tutorials
    :hidden:
    :glob:

-   new-tutorial/1_introduction
-   new-tutorial/2_dglgraph
-   new-tutorial/3_message_passing
-   new-tutorial/4_link_predict
-   new-tutorial/5_graph_classification
-   new-tutorial/6_load_data
+   new-tutorial/blitz/index
+   new-tutorial/large/index

 .. toctree::
-   :maxdepth: 2
-   :caption: Stochastic GNN Training Tutorials
+   :maxdepth: 3
+   :caption: Model Examples
    :hidden:
    :glob:

-   new-tutorial/L0_neighbor_sampling_overview
-   new-tutorial/L1_large_node_classification
-   new-tutorial/L2_large_link_prediction
-   new-tutorial/L4_message_passing
+   tutorials/models/index

 .. toctree::
    :maxdepth: 2
...
@@ -113,14 +106,7 @@ Getting Started
    :titlesonly:
    :glob:

-   guide/graph
-   guide/message
-   guide/nn
-   guide/data
-   guide/training
-   guide/minibatch
-   guide/distributed
-   guide/mixed_precision
+   guide/index

 .. toctree::
    :maxdepth: 2
...
@@ -139,14 +125,6 @@ Getting Started
    api/python/dgl.sampling
    api/python/udf

-.. toctree::
-   :maxdepth: 3
-   :caption: Model Tutorials
-   :hidden:
-   :glob:
-
-   tutorials/models/index

 .. toctree::
    :maxdepth: 1
    :caption: Developer Notes
...
new-tutorial/1_introduction.py → new-tutorial/blitz/1_introduction.py

 """
-Node Classification with DGL
-============================
+A Blitz Introduction to DGL - Node Classification
+=================================================

 GNNs are powerful tools for many machine learning tasks on graphs. In
 this introductory tutorial, you will learn the basic workflow of using
...
new-tutorial/2_dglgraph.py → new-tutorial/blitz/2_dglgraph.py (file moved)
new-tutorial/3_message_passing.py → new-tutorial/blitz/3_message_passing.py (file moved)
new-tutorial/4_link_predict.py → new-tutorial/blitz/4_link_predict.py (file moved)
new-tutorial/5_graph_classification.py → new-tutorial/blitz/5_graph_classification.py (file moved)
new-tutorial/6_load_data.py → new-tutorial/blitz/6_load_data.py (file moved)
new-tutorial/L0_neighbor_sampling_overview.py → new-tutorial/large/L0_neighbor_sampling_overview.py

...
@@ -2,7 +2,7 @@
 Introduction of Neighbor Sampling for GNN Training
 ==================================================

-In :doc:`previous tutorials <1_introduction>` you have learned how to
+In :doc:`previous tutorials <../blitz/1_introduction>` you have learned how to
 train GNNs by computing the representations of all nodes on a graph.
 However, sometimes your graph is too large to fit the computation of all
 nodes in a single GPU.
...
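The idea this tutorial introduces can be sketched without DGL: instead of aggregating over every neighbor of every node, draw a fixed-size neighbor sample per seed node so the per-minibatch computation stays bounded. A framework-free illustration (the toy graph and fanout here are made up for the example):

```python
import random

# Toy adjacency list; node 0 has many neighbors (hypothetical graph).
adjacency = {0: [1, 2, 3, 4, 5, 6], 1: [0, 2], 2: [0, 1],
             3: [0], 4: [0], 5: [0], 6: [0]}

def sample_neighbors(adjacency, seeds, fanout, rng):
    """For each seed node, keep at most `fanout` randomly chosen neighbors."""
    sampled = {}
    for node in seeds:
        neighbors = adjacency[node]
        if len(neighbors) <= fanout:
            sampled[node] = list(neighbors)
        else:
            sampled[node] = rng.sample(neighbors, fanout)
    return sampled

rng = random.Random(0)
frontier = sample_neighbors(adjacency, seeds=[0, 1], fanout=2, rng=rng)
for node, nbrs in frontier.items():
    # Per-node work is now bounded by the fanout, not the true degree.
    assert len(nbrs) <= 2
```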
@@ -20,7 +20,7 @@ By the end of this tutorial, you will be able to
 # ----------------------
 #
 # Recall that in `Gilmer et al. <https://arxiv.org/abs/1704.01212>`__
-# (also in :doc:`message passing tutorial <3_message_passing>`), the
+# (also in :doc:`message passing tutorial <../blitz/3_message_passing>`), the
 # message passing formulation is as follows:
 #
 # .. math::
...
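The message passing scheme referenced here can be sketched on scalar features: each directed edge (u, v) carries a message computed from h_u, and each node updates itself from the sum of its incoming messages. The graph, features, and the message/update functions below are hypothetical stand-ins for learned ones:

```python
edges = [(0, 1), (0, 2), (1, 2)]     # directed edges u -> v
h = {0: 1.0, 1: 2.0, 2: 3.0}         # scalar node features

def message(h_u):
    return 2.0 * h_u                  # stand-in for a learned message function

def update(h_v, aggregated):
    return h_v + aggregated           # stand-in for a learned update function

# Sum-aggregate messages per destination node, then update every node.
inbox = {v: 0.0 for v in h}
for u, v in edges:
    inbox[v] += message(h[u])

h_new = {v: update(h[v], inbox[v]) for v in h}
print(h_new)   # node 2 aggregates messages from both node 0 and node 1
```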
new-tutorial/L1_large_node_classification.py → new-tutorial/large/L1_large_node_classification.py

...
@@ -192,7 +192,7 @@ model = Model(num_features, 128, num_classes).to(device)
 ######################################################################
 # If you compare against the code in the
-# :doc:`introduction <1_introduction>`, you will notice several
+# :doc:`introduction <../blitz/1_introduction>`, you will notice several
 # differences:
 #
 # - **DGL GNN layers on bipartite graphs**. Instead of computing on the
...
new-tutorial/L2_large_link_prediction.py → new-tutorial/large/L2_large_link_prediction.py

...
@@ -40,7 +40,7 @@ Sampling for Node Classification <L1_large_node_classification>`.
 # \mathcal{L} = -\sum_{u\sim v\in \mathcal{D}}\left( y_{u\sim v}\log(\hat{y}_{u\sim v}) + (1-y_{u\sim v})\log(1-\hat{y}_{u\sim v}) \right)
 #
 # This is identical to the link prediction formulation in :doc:`the previous
-# tutorial on link prediction <4_link_predict>`.
+# tutorial on link prediction <../blitz/4_link_predict>`.
 #
...
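The loss shown in this hunk is plain binary cross-entropy over node pairs, where positive pairs (real edges) are labeled 1 and sampled negative pairs are labeled 0. A small stdlib sketch with made-up labels and predicted link probabilities:

```python
import math

# Hypothetical predicted link probabilities and their 0/1 labels
# (1 = a real edge, a positive pair; 0 = a sampled negative pair).
y_hat = [0.9, 0.8, 0.2, 0.1]
y     = [1,   1,   0,   0]

def link_prediction_loss(y, y_hat):
    # L = -sum( y*log(y_hat) + (1-y)*log(1-y_hat) ) over all pairs
    return -sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
                for yi, pi in zip(y, y_hat))

loss = link_prediction_loss(y, y_hat)
near_perfect = link_prediction_loss([1, 0], [1 - 1e-12, 1e-12])
assert loss > near_perfect   # confident, correct predictions drive the loss toward 0
```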
@@ -83,7 +83,7 @@ test_nids = idx_split['test']
 # ------------------------------------------------
 #
 # Different from the :doc:`link prediction tutorial for full
-# graph <4_link_predict>`, a common practice to train GNN on large graphs is
+# graph <../blitz/4_link_predict>`, a common practice to train GNN on large graphs is
 # to iterate over the edges
 # in minibatches, since computing the probability of all edges is usually
 # impossible. For each minibatch of edges, you compute the output
...
@@ -147,7 +147,7 @@ print(bipartites)
 # The second element and the third element are the positive graph and the
 # negative graph for this minibatch.
 # The concept of positive and negative graphs have been introduced in the
-# :doc:`full-graph link prediction tutorial <4_link_predict>`. In minibatch
+# :doc:`full-graph link prediction tutorial <../blitz/4_link_predict>`. In minibatch
 # training, the positive graph and the negative graph only contain nodes
 # necessary for computing the pair-wise scores of positive and negative examples
 # in the current minibatch.
...
@@ -200,7 +200,7 @@ model = Model(num_features, 128).to(device)
 # edges in the sampled minibatch.
 #
 # The following score predictor, copied from the :doc:`link prediction
-# tutorial <4_link_predict>`, takes a dot product between the
+# tutorial <../blitz/4_link_predict>`, takes a dot product between the
 # incident nodes' representations.
 #
...
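The dot-product predictor mentioned here scores a candidate edge as the inner product of its endpoints' representations, so pairs with similar embeddings score higher. A framework-free sketch (the node vectors are hypothetical):

```python
def dot_score(h_u, h_v):
    """Score an edge (u, v) as the dot product of the incident nodes' vectors."""
    return sum(a * b for a, b in zip(h_u, h_v))

# Hypothetical 3-dimensional node representations.
h = {'u': [1.0, 0.0, 2.0], 'v': [0.5, 1.0, 1.0], 'w': [-1.0, 0.0, -2.0]}

pos = dot_score(h['u'], h['v'])   # similar vectors -> high score
neg = dot_score(h['u'], h['w'])   # opposed vectors -> low score
assert pos > neg   # a trained model should rank true edges above negatives
```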
new-tutorial/L4_message_passing.py → new-tutorial/large/L4_message_passing.py

...
@@ -7,7 +7,7 @@ tutorial teaches you how to write your own graph neural network module
 for stochastic GNN training. It assumes that

 1. You know :doc:`how to write GNN modules for full graph
-   training <3_message_passing>`.
+   training <../blitz/3_message_passing>`.

 2. You know :doc:`how stochastic GNN training pipeline
    works <L1_large_node_classification>`.
...
@@ -137,7 +137,7 @@ m_v
 ######################################################################
 # Putting them together, you can implement a GraphSAGE convolution for
 # training with neighbor sampling as follows (the differences to the :doc:`full graph
-# counterpart <3_message_passing>` are highlighted with arrows ``<---``)
+# counterpart <../blitz/3_message_passing>` are highlighted with arrows ``<---``)
 #
 import torch.nn as nn
...
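The GraphSAGE convolution referenced above combines each node's own feature with an aggregate (here, the mean) of its neighbors' features before a learned transform. A scalar, framework-free sketch of that aggregation step, with a made-up graph and a simple average standing in for the learned weight:

```python
adjacency = {0: [1, 2], 1: [0], 2: [0, 1]}   # hypothetical toy graph
h = {0: 1.0, 1: 3.0, 2: 5.0}                 # scalar node features

def sage_aggregate(adjacency, h):
    """One GraphSAGE-style step: combine self feature with neighbor mean."""
    h_new = {}
    for v, neighbors in adjacency.items():
        neighbor_mean = sum(h[u] for u in neighbors) / len(neighbors)
        # Stand-in for W @ concat(h_v, neighbor_mean) in the real layer.
        h_new[v] = 0.5 * (h[v] + neighbor_mean)
    return h_new

print(sage_aggregate(adjacency, h))
```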
@@ -223,7 +223,7 @@ with tqdm.tqdm(train_dataloader) as tq:
 # ------------------------------------------------------------------------
 #
 # Here is a step-by-step tutorial for writing a GNN module for both
-# :doc:`full-graph training <1_introduction>` *and* :doc:`stochastic
+# :doc:`full-graph training <../blitz/1_introduction>` *and* :doc:`stochastic
 # training <L1_node_classification>`.
 #
 # Say you start with a GNN module that works for full-graph training only:
...
tutorials/models/1_gnn/1_gcn.py

...
@@ -55,6 +55,12 @@ gcn_reduce = fn.sum(msg='m', out='h')
 ###############################################################################
 # We then proceed to define the GCNLayer module. A GCNLayer essentially performs
 # message passing on all the nodes then applies a fully-connected layer.
 #
+# .. note::
+#
+#    This is showing how to implement a GCN from scratch. DGL provides a more
+#    efficient :class:`builtin GCN layer module <dgl.nn.pytorch.conv.GraphConv>`.
+#
 class GCNLayer(nn.Module):
     def __init__(self, in_feats, out_feats):
...
tutorials/models/1_gnn/4_rgcn.py

...
@@ -124,6 +124,11 @@ multiple edges among any given pair.
 # the full weight matrix has three dimensions: relation, input_feature,
 # output_feature.
 #
+# .. note::
+#
+#    This is showing how to implement an R-GCN from scratch. DGL provides a more
+#    efficient :class:`builtin R-GCN layer module <dgl.nn.pytorch.conv.RelGraphConv>`.
+#
 import torch
 import torch.nn as nn
...
tutorials/models/1_gnn/9_gat.py

...
@@ -106,6 +106,12 @@ from dgl.nn.pytorch import GATConv
 # To begin, you can get an overall impression about how a ``GATLayer`` module is
 # implemented in DGL. In this section, the four equations above are broken down
 # one at a time.
 #
+# .. note::
+#
+#    This is showing how to implement a GAT from scratch. DGL provides a more
+#    efficient :class:`builtin GAT layer module <dgl.nn.pytorch.conv.GATConv>`.
+#
 import torch
 import torch.nn as nn
...
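At the core of the GAT layer discussed in this file is a softmax over the raw attention scores of a node's incoming edges, which turns the scores into normalized aggregation weights. A stdlib sketch of that normalization step (the raw scores are hypothetical):

```python
import math

def edge_softmax(scores):
    """Normalize raw attention scores over one node's incoming edges."""
    m = max(scores)                          # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for three edges pointing at the same node.
alpha = edge_softmax([2.0, 1.0, 0.1])
assert abs(sum(alpha) - 1.0) < 1e-9   # weights form a probability distribution
assert alpha[0] > alpha[1] > alpha[2] # a higher raw score gets more weight
```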