"src/git@developer.sourcefind.cn:OpenDAS/dgl.git" did not exist on "ce6e19f23397d1aa1dacd7e46f914cf74c55b9d7"
Commit 8ae9770f authored by John Andrilla, committed by Minjie Wang

[Doc] Graph neural network and its variant Edit for grammar and style (#992)



* Edit for grammar and style

Improve readability

* Update tutorials/models/1_gnn/README.txt

Co-Authored-By: Aaron Markham <markhama@amazon.com>

* Update tutorials/models/1_gnn/README.txt

Co-Authored-By: Aaron Markham <markhama@amazon.com>
parent c9ac6c98
.. _tutorials1-index:

Graph neural networks and their variants
========================================
* **Graph convolutional network (GCN)** `[research paper] <https://arxiv.org/abs/1609.02907>`__ `[tutorial]
  <1_gnn/1_gcn.html>`__ `[PyTorch code]
  <https://github.com/dmlc/dgl/blob/master/examples/pytorch/gcn>`__
  `[MXNet code]
  <https://github.com/dmlc/dgl/tree/master/examples/mxnet/gcn>`__:
  This is the most basic GCN. The tutorial covers the basic uses of DGL APIs.
* **Graph attention network (GAT)** `[research paper] <https://arxiv.org/abs/1710.10903>`__ `[tutorial]
  <1_gnn/9_gat.html>`__ `[PyTorch code]
  <https://github.com/dmlc/dgl/blob/master/examples/pytorch/gat>`__
  `[MXNet code]
  <https://github.com/dmlc/dgl/tree/master/examples/mxnet/gat>`__:
  GAT extends the GCN by deploying multi-head attention over the
  neighborhood of a node. This greatly enhances the capacity and
  expressiveness of the model.
* **Relational-GCN** `[research paper] <https://arxiv.org/abs/1703.06103>`__ `[tutorial]
  <1_gnn/4_rgcn.html>`__ `[PyTorch code]
  <https://github.com/dmlc/dgl/tree/master/examples/pytorch/rgcn>`__
  `[MXNet code]
  <https://github.com/dmlc/dgl/tree/master/examples/mxnet/rgcn>`__:
  Relational-GCN allows multiple edges between two entities of a
  graph. Edges with distinct relationships are encoded differently.
* **Line graph neural network (LGNN)** `[research paper] <https://arxiv.org/abs/1705.08415>`__ `[tutorial]
  <1_gnn/6_line_graph.html>`__ `[PyTorch code]
  <https://github.com/dmlc/dgl/tree/master/examples/pytorch/line_graph>`__:
  This network focuses on community detection by inspecting graph structures. It
  uses representations of both the original graph and its line-graph
  companion. In addition to demonstrating how an algorithm can harness multiple
  graphs, this implementation shows how you can judiciously mix simple tensor
  operations and sparse-matrix tensor operations, along with message passing
  with DGL.
* **Stochastic steady-state embedding (SSE)** `[research paper] <http://proceedings.mlr.press/v80/dai18a/dai18a.pdf>`__ `[tutorial]
  <1_gnn/8_sse_mx.html>`__ `[MXNet code]
  <https://github.com/dmlc/dgl/blob/master/examples/mxnet/sse>`__:
  SSE illustrates the co-design of algorithm and system: sampling guarantees
  asymptotic convergence while lowering the complexity, and batching across
  samples maximizes parallelism. The emphasis here is a *giant* graph that
  cannot fit comfortably on one GPU card.
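The GCN propagation rule behind the first tutorial can be sketched without any
framework at all. The following is a minimal, dependency-free illustration (it
is not DGL's API): one layer computes ``H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)``,
i.e. symmetrically normalized neighborhood averaging followed by a linear
transform. The function name and dense-list representation are chosen for this
sketch only; real implementations use sparse tensors.

.. code:: python

    import math

    def gcn_layer(adj, feats, weight):
        """One GCN step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

        adj:    n x n adjacency matrix (nested lists of 0/1)
        feats:  n x f node-feature matrix
        weight: f x h weight matrix
        """
        n = len(adj)
        # Add self-loops: A + I
        a = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
             for i in range(n)]
        deg = [sum(row) for row in a]
        # Symmetric normalization: D^-1/2 (A + I) D^-1/2
        norm = [[a[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
                for i in range(n)]
        # Aggregate neighbor features: (normalized A) H
        agg = [[sum(norm[i][k] * feats[k][j] for k in range(n))
                for j in range(len(feats[0]))] for i in range(n)]
        # Linear transform plus ReLU: ((normalized A) H) W
        return [[max(0.0, sum(agg[i][k] * weight[k][j]
                              for k in range(len(weight))))
                 for j in range(len(weight[0]))] for i in range(n)]

For a two-node graph with one edge, identity weights, and features 1 and 2,
both nodes end up with the average 1.5, which is exactly the smoothing
behavior the GCN tutorial builds on.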
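The multi-head attention that GAT adds can likewise be sketched for a single
head. This is an illustrative, hypothetical helper (not DGL's ``GATConv``),
using scalar node features and scalar attention parameters ``a_src``/``a_dst``
for brevity: the raw score is ``e_ij = LeakyReLU(a_src * h_i + a_dst * h_j)``,
softmax-normalized over each node's neighborhood.

.. code:: python

    import math

    def leaky_relu(x, slope=0.2):
        return x if x > 0 else slope * x

    def gat_coefficients(h, neighbors, a_src, a_dst):
        """Attention weights for one head: softmax over each
        node's neighborhood of e_ij = LeakyReLU(a_src*h_i + a_dst*h_j).

        h:         list of scalar node features
        neighbors: dict mapping node id -> list of neighbor ids
        """
        alpha = {}
        for i, nbrs in neighbors.items():
            scores = [leaky_relu(a_src * h[i] + a_dst * h[j]) for j in nbrs]
            m = max(scores)  # subtract max for numerical stability
            exps = [math.exp(s - m) for s in scores]
            z = sum(exps)
            alpha[i] = [e / z for e in exps]
        return alpha

With zero attention parameters every neighbor gets equal weight and the head
degenerates to GCN-style mean aggregation; learned parameters let each head
weight neighbors differently, which is the extra capacity GAT provides.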