.. _tutorials1-index:

Graph Neural Network and its variants
=====================================

* **GCN** `[paper] <https://arxiv.org/abs/1609.02907>`__ `[tutorial]
  <1_gnn/1_gcn.html>`__ `[Pytorch code]
  <https://github.com/dmlc/dgl/blob/master/examples/pytorch/gcn>`__
  `[MXNet code]
  <https://github.com/dmlc/dgl/tree/master/examples/mxnet/gcn>`__:
  this is the vanilla GCN. The tutorial covers the basic uses of DGL APIs.
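
  The layer-wise propagation rule the tutorial builds on can be sketched in a
  few lines of NumPy. The function name ``gcn_layer`` and all variable names
  below are ours, for illustration only; the DGL tutorial expresses the same
  computation via message passing:

  ```python
  import numpy as np

  def gcn_layer(A, H, W):
      """One GCN step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

      A: (N, N) adjacency, H: (N, F) node features, W: (F, F') weights.
      Illustrative sketch only, not DGL's API.
      """
      A_hat = A + np.eye(A.shape[0])                       # add self-loops
      d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)      # symmetric normalization
      return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

  # Tiny 3-node path graph, 2-d input features, 4-d output
  A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
  H = np.arange(6.0).reshape(3, 2)
  W = np.ones((2, 4))
  out = gcn_layer(A, H, W)   # shape (3, 4)
  ```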

* **GAT** `[paper] <https://arxiv.org/abs/1710.10903>`__ `[tutorial]
  <1_gnn/9_gat.html>`__ `[Pytorch code]
  <https://github.com/dmlc/dgl/blob/master/examples/pytorch/gat>`__
  `[MXNet code]
  <https://github.com/dmlc/dgl/tree/master/examples/mxnet/gat>`__:
  the key extension of GAT over the vanilla GCN is multi-head attention
  over the neighborhood of a node, which greatly enhances the capacity and
  expressiveness of the model.
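
  What one attention head computes can be sketched densely in NumPy. The
  helper name ``gat_head`` and the dense adjacency masking are our own
  illustrative choices; real implementations work on sparse edge lists:

  ```python
  import numpy as np

  def gat_head(A, H, W, a):
      """One GAT head: e_ij = LeakyReLU(a^T [W h_i || W h_j]), softmax over
      the neighborhood of i. Multi-head attention concatenates several heads.
      A must include self-loops so every row has at least one neighbor.
      """
      Z = H @ W                                          # (N, F') projections
      fp = Z.shape[1]
      e = (Z @ a[:fp])[:, None] + (Z @ a[fp:])[None, :]  # pairwise logits
      e = np.where(e > 0, e, 0.2 * e)                    # LeakyReLU
      e = np.where(A > 0, e, -np.inf)                    # keep only neighbors
      alpha = np.exp(e - e.max(axis=1, keepdims=True))   # stable softmax
      alpha /= alpha.sum(axis=1, keepdims=True)
      return alpha @ Z                                   # weighted neighbor sum

  rng = np.random.default_rng(0)
  A = np.array([[1., 1., 0.], [1., 1., 1.], [0., 1., 1.]])  # path + self-loops
  H = rng.random((3, 2))
  W1, a1 = rng.random((2, 4)), rng.random(8)
  W2, a2 = rng.random((2, 4)), rng.random(8)
  # two heads, concatenated
  out = np.concatenate([gat_head(A, H, W1, a1), gat_head(A, H, W2, a2)], axis=1)
  ```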

* **R-GCN** `[paper] <https://arxiv.org/abs/1703.06103>`__ `[tutorial]
  <1_gnn/4_rgcn.html>`__ `[Pytorch code]
  <https://github.com/dmlc/dgl/tree/master/examples/pytorch/rgcn>`__
  `[MXNet code]
  <https://github.com/dmlc/dgl/tree/master/examples/mxnet/rgcn>`__:
  the key difference of R-GCN is that it allows multiple edges between two
  entities of a graph, with edges of distinct relation types encoded
  differently. This is an interesting extension of GCN that has many
  applications of its own.
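
  The relation-specific propagation can be sketched as follows. Function and
  variable names are ours, and this sketch omits the basis/block-diagonal
  weight decompositions the paper uses to keep parameter counts manageable:

  ```python
  import numpy as np

  def rgcn_layer(adjs, H, Ws, W_self):
      """R-GCN step: aggregate neighbors with a separate weight per relation,
      normalized by the per-relation degree, plus a self-loop term.

      adjs: one (N, N) adjacency per relation type; Ws: one (F, F') weight
      matrix per relation; W_self: (F, F') self-loop weight.
      """
      out = H @ W_self
      for A_r, W_r in zip(adjs, Ws):
          c = np.maximum(A_r.sum(axis=1, keepdims=True), 1.0)  # normalizer
          out += (A_r @ (H @ W_r)) / c
      return np.maximum(out, 0.0)  # ReLU

  rng = np.random.default_rng(0)
  N, F, Fp = 4, 3, 2
  adjs = [(rng.random((N, N)) > 0.5).astype(float) for _ in range(2)]  # 2 relations
  H = rng.random((N, F))
  Ws = [rng.random((F, Fp)) for _ in range(2)]
  out = rgcn_layer(adjs, H, Ws, rng.random((F, Fp)))
  ```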

* **LGNN** `[paper] <https://arxiv.org/abs/1705.08415>`__ `[tutorial]
  <1_gnn/6_line_graph.html>`__ `[Pytorch code]
  <https://github.com/dmlc/dgl/tree/master/examples/pytorch/line_graph>`__:
  this model focuses on community detection by inspecting graph structures. It
  uses representations of both the original graph and its line-graph
  companion. In addition to demonstrating how an algorithm can harness multiple
  graphs, our implementation shows how one can judiciously mix vanilla tensor
  operations, sparse-matrix tensor operations, and message passing with DGL.
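
  For intuition, the line-graph companion can be built with a small
  pure-Python helper. This sketch is our own, not DGL's line-graph API, and
  it uses the non-backtracking variant of the construction:

  ```python
  def line_graph(edges):
      """Non-backtracking line graph of a directed edge list.

      Each directed edge (u, v) becomes a node; (u, v) -> (v, w) whenever
      the walk can continue without immediately returning to u.
      Purely illustrative helper, not DGL's API.
      """
      return {
          (u, v): [(x, w) for (x, w) in edges if x == v and w != u]
          for (u, v) in edges
      }

  # Directed triangle: each edge continues to exactly one non-backtracking edge
  edges = [(0, 1), (1, 2), (2, 0), (1, 0), (2, 1), (0, 2)]
  lg = line_graph(edges)   # lg[(0, 1)] == [(1, 2)]
  ```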

* **SSE** `[paper] <http://proceedings.mlr.press/v80/dai18a/dai18a.pdf>`__ `[tutorial]
  <1_gnn/8_sse_mx.html>`__ `[MXNet code]
  <https://github.com/dmlc/dgl/blob/master/examples/mxnet/sse>`__:
  the emphasis here is on *giant* graphs that cannot fit comfortably on one GPU
  card. SSE is an example that illustrates the co-design of both algorithm and
  system: sampling to guarantee asymptotic convergence while lowering the
  complexity, and batching across samples for maximum parallelism.
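
  The sampling idea can be sketched with the stdlib alone. Names and
  structure below are our own toy sketch of the sampling step, not the actual
  SSE implementation:

  ```python
  import random

  def sample_subgraph(adj, batch_size, fanout, rng):
      """Pick a minibatch of nodes and at most `fanout` neighbors each, so a
      single update step only touches a small subgraph instead of the whole
      giant graph. Illustrative of the sampling step only.
      """
      nodes = rng.sample(sorted(adj), min(batch_size, len(adj)))
      return {v: rng.sample(adj[v], min(fanout, len(adj[v]))) for v in nodes}

  # Toy graph: cycle of 6 nodes
  adj = {v: [(v - 1) % 6, (v + 1) % 6] for v in range(6)}
  batch = sample_subgraph(adj, batch_size=2, fanout=1, rng=random.Random(0))
  ```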