Unverified Commit 9e04a52a authored by Quan (Andy) Gan, committed by GitHub

[Doc] Reorganize tutorial (#2678)

parent dda103d9
@@ -197,8 +197,12 @@ from sphinx_gallery.sorting import FileNameSortKey
 examples_dirs = ['../../tutorials/basics',
                  '../../tutorials/models',
-                 '../../new-tutorial']   # path to find sources
-gallery_dirs = ['tutorials/basics', 'tutorials/models', 'new-tutorial']  # path to generate docs
+                 '../../new-tutorial/blitz',
+                 '../../new-tutorial/large']   # path to find sources
+gallery_dirs = ['tutorials/basics',
+                'tutorials/models',
+                'new-tutorial/blitz',
+                'new-tutorial/large']  # path to generate docs
 reference_url = {
     'dgl' : None,
     'numpy': 'http://docs.scipy.org/doc/numpy/',
......
@@ -84,27 +84,20 @@ Getting Started
 .. toctree::
    :maxdepth: 2
-   :caption: Basic Tutorials
+   :caption: Tutorials
    :hidden:
    :glob:
 
-   new-tutorial/1_introduction
-   new-tutorial/2_dglgraph
-   new-tutorial/3_message_passing
-   new-tutorial/4_link_predict
-   new-tutorial/5_graph_classification
-   new-tutorial/6_load_data
+   new-tutorial/blitz/index
+   new-tutorial/large/index
 
 .. toctree::
-   :maxdepth: 2
-   :caption: Stochastic GNN Training Tutorials
+   :maxdepth: 3
+   :caption: Model Examples
    :hidden:
    :glob:
 
-   new-tutorial/L0_neighbor_sampling_overview
-   new-tutorial/L1_large_node_classification
-   new-tutorial/L2_large_link_prediction
-   new-tutorial/L4_message_passing
+   tutorials/models/index
 
 .. toctree::
    :maxdepth: 2
@@ -113,14 +106,7 @@ Getting Started
    :titlesonly:
    :glob:
 
-   guide/graph
-   guide/message
-   guide/nn
-   guide/data
-   guide/training
-   guide/minibatch
-   guide/distributed
-   guide/mixed_precision
+   guide/index
 
 .. toctree::
    :maxdepth: 2
@@ -139,14 +125,6 @@ Getting Started
    api/python/dgl.sampling
    api/python/udf
 
-.. toctree::
-   :maxdepth: 3
-   :caption: Model Tutorials
-   :hidden:
-   :glob:
-
-   tutorials/models/index
-
 .. toctree::
    :maxdepth: 1
    :caption: Developer Notes
......
"""
A Blitz Introduction to DGL - Node Classification
=================================================
Node Classification with DGL
============================
GNNs are powerful tools for many machine learning tasks on graphs. In
this introductory tutorial, you will learn the basic workflow of using
......
@@ -2,7 +2,7 @@
 Introduction of Neighbor Sampling for GNN Training
 ==================================================
 
-In :doc:`previous tutorials <1_introduction>` you have learned how to
+In :doc:`previous tutorials <../blitz/1_introduction>` you have learned how to
 train GNNs by computing the representations of all nodes on a graph.
 However, sometimes your graph is too large to fit the computation of all
 nodes in a single GPU.
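
For reference, minibatch training with neighbor sampling looks roughly like the sketch below, using the DGL 0.6-era dataloading API; the graph ``g``, the training node IDs ``train_nids``, and the fanouts are assumptions, not part of this diff.

import dgl

# Sample at most 4 neighbors per node for each of the 2 GNN layers,
# instead of computing on the entire graph at once.
sampler = dgl.dataloading.MultiLayerNeighborSampler([4, 4])
dataloader = dgl.dataloading.NodeDataLoader(
    g, train_nids, sampler,
    batch_size=1024, shuffle=True, drop_last=False)

for input_nodes, output_nodes, bipartites in dataloader:
    # ``bipartites`` holds one bipartite graph ("block") per GNN layer.
    pass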
@@ -20,7 +20,7 @@ By the end of this tutorial, you will be able to
 # ----------------------
 #
 # Recall that in `Gilmer et al. <https://arxiv.org/abs/1704.01212>`__
-# (also in :doc:`message passing tutorial <3_message_passing>`), the
+# (also in :doc:`message passing tutorial <../blitz/3_message_passing>`), the
 # message passing formulation is as follows:
 #
 # .. math::
......
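
For reference, one round of this message passing can be written with DGL built-in functions; a sketch assuming a graph ``g`` with node features stored under ``'h'`` and sum aggregation.

import dgl.function as fn

# Each edge copies the source node's features into message 'm'; each node
# then sums its incoming messages into 'h_agg'. An update function would
# combine 'h' and 'h_agg' to produce the next-layer representation.
g.update_all(fn.copy_u('h', 'm'), fn.sum('m', 'h_agg'))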
@@ -192,7 +192,7 @@ model = Model(num_features, 128, num_classes).to(device)
 ######################################################################
 # If you compare against the code in the
-# :doc:`introduction <1_introduction>`, you will notice several
+# :doc:`introduction <../blitz/1_introduction>`, you will notice several
 # differences:
 #
 # - **DGL GNN layers on bipartite graphs**. Instead of computing on the
......
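
For reference, running a DGL GNN layer on one of these bipartite graphs might look like the sketch below; the feature sizes and the input feature tensor ``x`` are hypothetical. Destination nodes appear first among the source nodes, hence the slice.

import torch.nn.functional as F
import dgl.nn as dglnn

conv = dglnn.SAGEConv(100, 128, 'mean')   # hypothetical feature sizes

# x holds features of all input (source) nodes of the first bipartite
# graph; its destination nodes are a prefix of its source nodes.
h_dst = x[:bipartites[0].num_dst_nodes()]
h = F.relu(conv(bipartites[0], (x, h_dst)))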
@@ -40,7 +40,7 @@ Sampling for Node Classification <L1_large_node_classification>`.
 #    \mathcal{L} = -\sum_{u\sim v\in \mathcal{D}}\left( y_{u\sim v}\log(\hat{y}_{u\sim v}) + (1-y_{u\sim v})\log(1-\hat{y}_{u\sim v}) \right)
 #
 # This is identical to the link prediction formulation in :doc:`the previous
-# tutorial on link prediction <4_link_predict>`.
+# tutorial on link prediction <../blitz/4_link_predict>`.
 #
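
For reference, a sketch of this loss in code, assuming score tensors ``pos_score`` and ``neg_score`` produced by a score predictor.

import torch
import torch.nn.functional as F

# Positive edges get label 1, negative edges get label 0; the loss is
# binary cross entropy over the concatenated scores, as in the formula.
score = torch.cat([pos_score, neg_score])
label = torch.cat([torch.ones_like(pos_score), torch.zeros_like(neg_score)])
loss = F.binary_cross_entropy_with_logits(score, label)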
@@ -83,7 +83,7 @@ test_nids = idx_split['test']
 # ------------------------------------------------
 #
 # Different from the :doc:`link prediction tutorial for full
-# graph <4_link_predict>`, a common practice to train GNN on large graphs is
+# graph <../blitz/4_link_predict>`, a common practice to train GNNs on large graphs is
 # to iterate over the edges
 # in minibatches, since computing the probability of all edges is usually
 # impossible. For each minibatch of edges, you compute the output
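
For reference, iterating over edges in minibatches uses ``dgl.dataloading.EdgeDataLoader`` in this version of DGL; a sketch, where ``g``, the training edge IDs ``train_eids``, and the sampler fanouts are assumptions.

import dgl

sampler = dgl.dataloading.MultiLayerNeighborSampler([4, 4])
dataloader = dgl.dataloading.EdgeDataLoader(
    g, train_eids, sampler,
    # Draw 5 uniformly sampled negative examples per positive edge.
    negative_sampler=dgl.dataloading.negative_sampler.Uniform(5),
    batch_size=1024, shuffle=True)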
@@ -147,7 +147,7 @@ print(bipartites)
 # The second element and the third element are the positive graph and the
 # negative graph for this minibatch.
 # The concept of positive and negative graphs has been introduced in the
-# :doc:`full-graph link prediction tutorial <4_link_predict>`. In minibatch
+# :doc:`full-graph link prediction tutorial <../blitz/4_link_predict>`. In minibatch
 # training, the positive graph and the negative graph only contain nodes
 # necessary for computing the pair-wise scores of positive and negative examples
 # in the current minibatch.
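
For reference, each minibatch from such a dataloader unpacks as in the sketch below (names assumed from the surrounding tutorial text).

# Input node IDs, positive graph, negative graph, and the list of
# bipartite graphs (one per GNN layer) for this minibatch.
input_nodes, pos_graph, neg_graph, bipartites = next(iter(dataloader))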
@@ -200,7 +200,7 @@ model = Model(num_features, 128).to(device)
 # edges in the sampled minibatch.
 #
 # The following score predictor, copied from the :doc:`link prediction
-# tutorial <4_link_predict>`, takes a dot product between the
+# tutorial <../blitz/4_link_predict>`, takes a dot product between the
 # incident nodes’ representations.
 #
......
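
The predictor referenced above is essentially a dot product computed with ``apply_edges``; a minimal sketch, not necessarily the tutorial's exact code.

import torch.nn as nn
import dgl.function as fn

class DotProductPredictor(nn.Module):
    def forward(self, graph, h):
        with graph.local_scope():
            graph.ndata['h'] = h
            # Dot product between the two incident nodes of each edge.
            graph.apply_edges(fn.u_dot_v('h', 'h', 'score'))
            return graph.edata['score']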
@@ -7,7 +7,7 @@ tutorial teaches you how to write your own graph neural network module
 for stochastic GNN training. It assumes that
 
 1. You know :doc:`how to write GNN modules for full graph
-   training <3_message_passing>`.
+   training <../blitz/3_message_passing>`.
 2. You know :doc:`how the stochastic GNN training pipeline
    works <L1_large_node_classification>`.
@@ -137,7 +137,7 @@ m_v
 ######################################################################
 # Putting them together, you can implement a GraphSAGE convolution for
 # training with neighbor sampling as follows (the differences from the :doc:`full graph
-# counterpart <3_message_passing>` are highlighted with arrows ``<---``)
+# counterpart <../blitz/3_message_passing>` are highlighted with arrows ``<---``)
 #
 import torch.nn as nn
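
The module body is elided in this view; the sketch below shows how such a convolution typically looks under the stated assumptions, with the ``<---`` arrows marking the sampling-specific lines. It is not the tutorial's exact code.

import torch
import torch.nn.functional as F
import dgl.function as fn

class SAGEConv(nn.Module):
    def __init__(self, in_feat, out_feat):
        super().__init__()
        self.linear = nn.Linear(in_feat * 2, out_feat)

    def forward(self, block, h):
        with block.local_scope():
            h_dst = h[:block.num_dst_nodes()]   # <--- destination nodes come first
            block.srcdata['h'] = h
            block.dstdata['h'] = h_dst          # <--- separate source/destination features
            block.update_all(fn.copy_u('h', 'm'), fn.mean('m', 'h_neigh'))
            return F.relu(self.linear(
                torch.cat([block.dstdata['h'], block.dstdata['h_neigh']], dim=1)))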
@@ -223,7 +223,7 @@ with tqdm.tqdm(train_dataloader) as tq:
 # ------------------------------------------------------------------------
 #
 # Here is a step-by-step tutorial for writing a GNN module for both
-# :doc:`full-graph training <1_introduction>` *and* :doc:`stochastic
+# :doc:`full-graph training <../blitz/1_introduction>` *and* :doc:`stochastic
 # training <L1_large_node_classification>`.
 #
 # Say you start with a GNN module that works for full-graph training only:
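
For illustration, such a full-graph-only module might look like this hypothetical sketch; the layer sizes and module name are assumptions.

import torch.nn as nn
import torch.nn.functional as F
import dgl.nn as dglnn

class Model(nn.Module):
    def __init__(self, in_feats, h_feats, num_classes):
        super().__init__()
        self.conv1 = dglnn.SAGEConv(in_feats, h_feats, 'mean')
        self.conv2 = dglnn.SAGEConv(h_feats, num_classes, 'mean')

    def forward(self, g, x):
        # Works on a full DGLGraph only; bipartite graphs would need the
        # source/destination split shown earlier.
        h = F.relu(self.conv1(g, x))
        return self.conv2(g, h)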
......
@@ -55,6 +55,12 @@ gcn_reduce = fn.sum(msg='m', out='h')
 ###############################################################################
 # We then proceed to define the GCNLayer module. A GCNLayer essentially performs
 # message passing on all the nodes and then applies a fully-connected layer.
+#
+# .. note::
+#
+#    This tutorial shows how to implement a GCN from scratch. DGL provides a more
+#    efficient :class:`builtin GCN layer module <dgl.nn.pytorch.conv.GraphConv>`.
+#
 class GCNLayer(nn.Module):
     def __init__(self, in_feats, out_feats):
......
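
The class body is truncated in this view; a sketch of how such a layer is typically completed, using the ``gcn_reduce`` built-in shown in the hunk header and a ``gcn_msg`` message function assumed to copy source features. This is not the exact diff content.

import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_feats, out_feats):
        super().__init__()
        self.linear = nn.Linear(in_feats, out_feats)

    def forward(self, g, feature):
        with g.local_scope():
            g.ndata['h'] = feature
            # Message passing: copy neighbor features and sum them into 'h'.
            g.update_all(gcn_msg, gcn_reduce)
            # Fully-connected layer on the aggregated features.
            return self.linear(g.ndata['h'])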
@@ -124,6 +124,11 @@ multiple edges among any given pair.
 # the full weight matrix has three dimensions: relation, input_feature,
 # output_feature.
 #
+# .. note::
+#
+#    This tutorial shows how to implement an R-GCN from scratch. DGL provides a more
+#    efficient :class:`builtin R-GCN layer module <dgl.nn.pytorch.conv.RelGraphConv>`.
+#
 import torch
 import torch.nn as nn
......
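
For reference, a sketch of such a three-dimensional weight; ``num_rels``, ``in_feat``, and ``out_feat`` are assumed attributes, not names taken from this diff.

import torch
import torch.nn as nn

# One (in_feat, out_feat) weight matrix per relation type.
weight = nn.Parameter(torch.Tensor(num_rels, in_feat, out_feat))
nn.init.xavier_uniform_(weight, gain=nn.init.calculate_gain('relu'))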
@@ -106,6 +106,12 @@ from dgl.nn.pytorch import GATConv
 # To begin, you can get an overall impression of how a ``GATLayer`` module is
 # implemented in DGL. In this section, the four equations above are broken down
 # one at a time.
+#
+# .. note::
+#
+#    This tutorial shows how to implement a GAT from scratch. DGL provides a more
+#    efficient :class:`builtin GAT layer module <dgl.nn.pytorch.conv.GATConv>`.
+#
 import torch
 import torch.nn as nn
......
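
For reference, a single-head GAT layer following those four equations can be sketched as below; the names and shapes are assumptions, not necessarily the tutorial's exact code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    def __init__(self, g, in_dim, out_dim):
        super().__init__()
        self.g = g
        self.fc = nn.Linear(in_dim, out_dim, bias=False)       # eq. (1): z = W h
        self.attn_fc = nn.Linear(2 * out_dim, 1, bias=False)   # eq. (2): a^T [z_u || z_v]

    def edge_attention(self, edges):
        z2 = torch.cat([edges.src['z'], edges.dst['z']], dim=1)
        return {'e': F.leaky_relu(self.attn_fc(z2))}

    def reduce_func(self, nodes):
        alpha = F.softmax(nodes.mailbox['e'], dim=1)            # eq. (3): softmax over neighbors
        return {'h': torch.sum(alpha * nodes.mailbox['z'], dim=1)}  # eq. (4): weighted sum

    def forward(self, h):
        self.g.ndata['z'] = self.fc(h)
        self.g.apply_edges(self.edge_attention)
        self.g.update_all(lambda edges: {'z': edges.src['z'], 'e': edges.data['e']},
                          self.reduce_func)
        return self.g.ndata.pop('h')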