Unverified commit cab1fdf2, authored by Minjie Wang, committed by GitHub

[Doc] doc update (#158)

* minor fix

* jake's change
parent 79a51025
@@ -4,22 +4,36 @@
DGL at a glance
=========================
**Author**: `Minjie Wang <https://jermainewang.github.io/>`_, Quan Gan,
`Jake Zhao <https://cs.nyu.edu/~jakezhao/>`_, Zheng Zhang

The goal of this tutorial:

- Understand, at a high level, how DGL builds a graph.
- Perform simple computation on graphs.

By the end of this tutorial, we hope you have a basic feel for how DGL works.
"""
###############################################################################
# Why DGL?
# ----------------
# DGL is designed to bring **machine learning** closer to **graph-structured
# data**. Specifically, DGL enables hassle-free implementation of the graph
# neural network (GNN) model family. Unlike PyTorch or TensorFlow, DGL provides
# friendly APIs for the fundamental operations in GNNs, such as message
# passing and reduction. Through DGL, we hope to benefit both researchers
# trying out new ideas and engineers in production.
#
# *This tutorial assumes basic familiarity with networkx.*
###############################################################################
# Building a graph
# ----------------
#
# A graph is built using the :class:`~dgl.DGLGraph` class.
# As a toy example, we define a graph with two nodes and then assign
# features to its nodes and edges:

import torch as th
import networkx as nx
@@ -39,12 +53,13 @@ def a_boring_graph():
    return g
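To make the toy-graph idea concrete without requiring dgl or torch, here is a stdlib-only stand-in that mirrors the same structure (two nodes, one edge, feature vectors on both); the dict layout and feature names are illustrative, not DGL's API:

```python
# Minimal stand-in for the tutorial's toy graph, using only plain dicts.
# The real tutorial builds a dgl.DGLGraph and attaches torch tensors;
# here the "features" are just Python lists.

def a_boring_graph():
    g = {
        "nodes": {0: {"h": [1.0, 0.0]},      # feature vector on node 0
                  1: {"h": [0.0, 1.0]}},     # feature vector on node 1
        "edges": {(0, 1): {"w": [0.5]}},     # feature vector on edge 0 -> 1
    }
    return g

g = a_boring_graph()
print(len(g["nodes"]), len(g["edges"]))  # 2 1
```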
###############################################################################
# We can also convert a graph defined by `networkx
# <https://networkx.github.io/documentation/stable/>`_ to DGL:
def an_interesting_graph():
    import networkx as nx
    N = 70
    g = nx.erdos_renyi_graph(N, 0.1)
    g = dgl.DGLGraph(g)
@@ -56,7 +71,7 @@ def an_interesting_graph():
    return g
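For readers curious what ``erdos_renyi_graph(N, 0.1)`` actually produces, a stdlib sketch of the same random-graph model follows (networkx's generator does essentially this coin-flip per node pair; the function name here is ours, not networkx's):

```python
import random

def erdos_renyi_edges(n, p, seed=0):
    """G(n, p) model: include each undirected pair (u, v) with probability p."""
    rng = random.Random(seed)
    return [(u, v) for u in range(n) for v in range(u + 1, n)
            if rng.random() < p]

edges = erdos_renyi_edges(70, 0.1)
# expected edge count is p * n*(n-1)/2 = 0.1 * 70*69/2 = 241.5 on average
print(len(edges))
```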
###############################################################################
# By default, a :class:`~dgl.DGLGraph` is directed:
g_boring = a_boring_graph()
g_better = an_interesting_graph()
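What "directed by default" means in practice can be shown with a plain successor-list representation (stdlib only, not DGL's API): adding edge ``0 -> 1`` does not create the reverse edge.

```python
# Two nodes, one directed edge, stored as successor sets.
succ = {0: set(), 1: set()}
succ[0].add(1)           # add directed edge 0 -> 1

print(1 in succ[0])      # True: 0 -> 1 exists
print(0 in succ[1])      # False: the reverse edge 1 -> 0 was never added
```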
@@ -68,14 +83,31 @@ plt.show()
###############################################################################
# Define Computation
# ------------------
# The canonical functionality of DGL is to provide efficient message passing
# and merging on graphs. It is implemented through a message-passing interface
# powered by the scatter-gather paradigm (i.e. a mailbox metaphor).
#
# To give an intuitive example, suppose we have one node :math:`v` together
# with many incoming edges :math:`e_i\in\mathcal{N}(v)`. Each node and edge is
# tagged with its own features. We can then perform one iteration of message
# passing and merging with the following routine:
#
# - Each edge :math:`e_i` passes its information along to the node :math:`v`
#   via ``send_source``.
# - A reduce operation, ``simple_reduce``, is triggered to gather the
#   messages sent from the edges.
# - Finally, the ``readout`` function is called to yield the updated feature
#   on :math:`v`.
#
# A graphical demonstration is displayed below, followed by a complete
# implementation.
#
# .. image:: https://drive.google.com/uc?export=view&id=1rc9cR0Iw96m_wjS55V9LJOJ4RpQBja15
#    :height: 300px
#    :width: 400px
#    :alt: mailbox
#    :align: center
#
# For readers familiar with graph convolutional networks, the pattern here
# should be easy to recognize.
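The three-step routine can also be sketched in plain Python, with the mailbox literally a list per node. The names ``send_source``, ``simple_reduce``, and ``readout`` follow the prose above; the scalar features and function signatures are illustrative, not DGL's actual API:

```python
def send_source(nodes, edges, mailbox):
    # scatter: each edge (u, v) drops the source node's feature into v's mailbox
    for (u, v) in edges:
        mailbox[v].append(nodes[u])

def simple_reduce(nodes, mailbox):
    # gather: each node with pending messages adds their sum to its feature
    for v, msgs in mailbox.items():
        if msgs:
            nodes[v] = nodes[v] + sum(msgs)

def readout(nodes):
    # graph-level feature: sum over all node features
    return sum(nodes.values())

nodes = {0: 1.0, 1: 2.0, 2: 3.0}
edges = [(0, 2), (1, 2)]             # two edges incoming to node 2
mailbox = {v: [] for v in nodes}

send_source(nodes, edges, mailbox)
simple_reduce(nodes, mailbox)
print(nodes[2])                      # 3.0 + (1.0 + 2.0) = 6.0
print(readout(nodes))                # 1.0 + 2.0 + 6.0 = 9.0
```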
def super_useful_comp(g):
@@ -98,8 +130,7 @@ def super_useful_comp(g):
    return readout(g)
###############################################################################
# See the Python wrapper in action:
g_boring = a_boring_graph()
graph_sum = super_useful_comp(g_boring)
@@ -112,5 +143,5 @@ print("graph sum is: ", graph_sum)
###############################################################################
# Next steps
# ----------
# In the :doc:`next tutorial <2_basics>`, we will go through some more basics
# of DGL, such as reading and writing node/edge features.
@@ -4,7 +4,8 @@
DGL Basics
==========
**Author**: `Minjie Wang <https://jermainewang.github.io/>`_, Quan Gan, Yu Gai,
Zheng Zhang

The goal of this tutorial:
......
@@ -4,7 +4,8 @@
PageRank with DGL Message Passing
=================================
**Author**: `Minjie Wang <https://jermainewang.github.io/>`_, Quan Gan, Yu Gai,
Zheng Zhang
In this section we illustrate the usage of the different levels of the message
passing API with PageRank on a small graph. In DGL, the message passing and
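To preview what "PageRank as message passing" looks like, here is a stdlib-only sketch in the same two-phase style (message, then reduce); the constants and function name are ours for illustration, not the tutorial's DGL code:

```python
DAMP = 0.85   # damping factor
K = 20        # number of message-passing iterations

def pagerank(edges, n):
    pv = [1.0 / n] * n
    out_deg = [0] * n
    for u, _ in edges:
        out_deg[u] += 1
    for _ in range(K):
        mailbox = [[] for _ in range(n)]
        for u, v in edges:                       # message phase:
            mailbox[v].append(pv[u] / out_deg[u])  # send pv/out_degree along edge
        pv = [(1 - DAMP) / n + DAMP * sum(m)     # reduce phase: damping update
              for m in mailbox]
    return pv

# Tiny directed 3-cycle: by symmetry every node keeps rank 1/3.
pv = pagerank([(0, 1), (1, 2), (2, 0)], 3)
print([round(x, 3) for x in pv])  # [0.333, 0.333, 0.333]
```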
......
@@ -4,7 +4,8 @@
Capsule Network Tutorial
===========================
**Author**: Jinjing Zhou, `Jake Zhao <https://cs.nyu.edu/~jakezhao/>`_,
Zheng Zhang
It is perhaps a little surprising that some of the more classical models can
also be described in terms of graphs, offering a different perspective.
......
@@ -4,8 +4,9 @@
Tree LSTM DGL Tutorial
=========================
**Author**: Zihao Ye, Qipeng Guo, `Minjie Wang
<https://jermainewang.github.io/>`_, `Jake Zhao
<https://cs.nyu.edu/~jakezhao/>`_, Zheng Zhang
"""
##############################################################################
......