Unverified Commit 8f5baa15 authored by Jeremy Goh's avatar Jeremy Goh Committed by GitHub

[Doc] Fix spelling, references and update info on building docs (#3682)

* Fix ref to message-passing guide

* Fix pygments and spacing

* Update build documentation steps in README.md

* Use links

* Adjust parameters in SAGEConv docstring in same order as init

* Fix spelling error

* Change doc link
parent dc629fc5
@@ -3,6 +3,7 @@ DGL document and tutorial folder
Requirements
------------
You need to build DGL locally first (as described [here](https://docs.dgl.ai/install/index.html#install-from-source)), and ensure the following Python packages are installed:
* sphinx
* sphinx-gallery
* sphinx_rtd_theme
......
@@ -6,7 +6,7 @@ dgl.function
==================================
This subpackage hosts all the **built-in functions** provided by DGL. Built-in functions
are DGL's recommended way to express different types of :ref:`guide-message-passing` computation
(i.e., via :func:`~dgl.DGLGraph.update_all`) or computing edge-wise features from
node-wise features (i.e., via :func:`~dgl.DGLGraph.apply_edges`). Built-in functions
describe the node-wise and edge-wise computation in a symbolic way without any
@@ -14,7 +14,7 @@ actual computation, so DGL can analyze and map them to efficient low-level kernels.
Here are some examples:
.. code:: python
import dgl
import dgl.function as fn
import torch as th
@@ -55,7 +55,7 @@ following user-defined function:
def udf_max(nodes):
    return {'h_max' : th.max(nodes.mailbox['m'], 1)[0]}
All binary message functions support **broadcasting**, a mechanism for extending element-wise
operations to tensor inputs with different shapes. DGL generally follows the standard
broadcasting semantic by `NumPy <https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html>`_
and `PyTorch <https://pytorch.org/docs/stable/notes/broadcasting.html>`_. Below are some
......
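For instance, the broadcasting rule referred to above can be sketched in plain NumPy, whose semantics DGL follows; the shapes below are illustrative assumptions, not DGL API calls:

```python
import numpy as np

# Trailing dimensions are aligned right-to-left; size-1 dimensions
# are stretched to match the other operand.
src_feat = np.ones((4, 3, 1))  # e.g. a per-source feature block
dst_feat = np.ones((4, 1, 5))  # e.g. a per-destination feature block

out = src_feat + dst_feat      # broadcasts to shape (4, 3, 5)
```
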
@@ -59,5 +59,5 @@ respectively:
The above two implementations are mathematically equivalent. The latter
one is more efficient because it does not need to save ``feat_src`` and
``feat_dst`` on edges, which would not be memory-efficient. Plus, the addition can
be optimized with DGL’s built-in function :func:`~dgl.function.u_add_v`, which further
speeds up computation and reduces the memory footprint.
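To make the equivalence concrete, here is a small NumPy sketch of what ``u_add_v`` computes per edge. The node indices and feature values are made-up illustrations, and this is not the fused DGL kernel itself (which avoids materializing the per-edge operands):

```python
import numpy as np

# Toy graph: edge i goes from src[i] to dst[i].
src = np.array([0, 1, 2])
dst = np.array([1, 2, 0])

feat_src = np.arange(12.0).reshape(3, 4)  # one feature row per node
feat_dst = feat_src * 10

# Per-edge result that u_add_v would produce: for each edge (u, v),
# take u's source feature plus v's destination feature.
edge_feat = feat_src[src] + feat_dst[dst]
```
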
@@ -10,7 +10,7 @@ case, you would like to have an *edge classification/regression* model.
Here we generate a random graph for edge prediction as a demonstration.
.. code:: python
src = np.random.randint(0, 100, 500)
dst = np.random.randint(0, 100, 500)
@@ -328,5 +328,3 @@ file <https://github.com/dmlc/dgl/tree/master/examples/pytorch/gcmc>`__
is called ``GCMCLayer``. The edge type predictor module is called
``BiDecoder``. Both of them are more complicated than the setting
described here.
@@ -9,7 +9,7 @@
The following code generates a random graph for demonstrating edge classification/regression.
.. code:: python
src = np.random.randint(0, 100, 500)
dst = np.random.randint(0, 100, 500)
@@ -59,13 +59,13 @@
def __init__(self, in_features, out_classes):
    super().__init__()
    self.W = nn.Linear(in_features * 2, out_classes)

def apply_edges(self, edges):
    h_u = edges.src['h']
    h_v = edges.dst['h']
    score = self.W(torch.cat([h_u, h_v], 1))
    return {'score': score}

def forward(self, graph, h):
    # h contains the node representations computed by the GNN model in Section 5.1
    with graph.local_scope():
@@ -136,13 +136,13 @@
def __init__(self, in_features, out_classes):
    super().__init__()
    self.W = nn.Linear(in_features * 2, out_classes)

def apply_edges(self, edges):
    h_u = edges.src['h']
    h_v = edges.dst['h']
    score = self.W(torch.cat([h_u, h_v], 1))
    return {'score': score}

def forward(self, graph, h, etype):
    # h contains the node representations computed in Section 5.1 for each edge type of the heterogeneous graph
    with graph.local_scope():
@@ -229,12 +229,12 @@
def __init__(self, in_dims, n_classes):
    super().__init__()
    self.W = nn.Linear(in_dims * 2, n_classes)

def apply_edges(self, edges):
    x = torch.cat([edges.src['h'], edges.dst['h']], 1)
    y = self.W(x)
    return {'score': y}

def forward(self, graph, h):
    # h contains the node representations computed in Section 5.1 for each edge type of the heterogeneous graph
    with graph.local_scope():
@@ -263,7 +263,7 @@
user_feats = hetero_graph.nodes['user'].data['feature']
item_feats = hetero_graph.nodes['item'].data['feature']
node_features = {'user': user_feats, 'item': item_feats}

opt = torch.optim.Adam(model.parameters())
for epoch in range(10):
    logits = model(hetero_graph, node_features, dec_graph)
......
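The elided remainder of the loop follows the standard PyTorch pattern: forward, loss, ``zero_grad``, ``backward``, ``step``. As a self-contained sketch with a stand-in linear model and random data (the real guide uses ``model``, ``hetero_graph``, and ``dec_graph`` from earlier sections, which are not reproduced here):

```python
import torch

# Stand-ins for the guide's model and data, for illustration only.
model = torch.nn.Linear(8, 3)
x = torch.randn(20, 8)
labels = torch.randint(0, 3, (20,))

opt = torch.optim.Adam(model.parameters())
for epoch in range(10):
    logits = model(x)                                   # forward pass
    loss = torch.nn.functional.cross_entropy(logits, labels)
    opt.zero_grad()                                     # clear stale gradients
    loss.backward()                                     # backpropagate
    opt.step()                                          # update parameters
```
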
@@ -7,9 +7,9 @@
Sometimes you may want to predict attributes of the edges of a graph. For this, you would like to build an *edge classification/regression* model.

First, we create a random graph for edge prediction to use as an example.
.. code:: python
src = np.random.randint(0, 100, 500)
dst = np.random.randint(0, 100, 500)
@@ -270,4 +270,4 @@ Predicting the edge type of edges in a heterogeneous graph
opt.step()
print(loss.item())
As an example of rating prediction, which is the problem of predicting the type of edges in a DGL heterogeneous graph, DGL provides `Graph Convolutional Matrix Completion <https://github.com/dmlc/dgl/tree/master/examples/pytorch/gcmc>`__. The node representation module in the `model implementation file <https://github.com/dmlc/dgl/tree/master/examples/pytorch/gcmc>`__ is called ``GCMCLayer``, and the edge type predictor module is called ``BiDecoder``. Both are too complicated to explain here, so the details are omitted.
@@ -229,7 +229,7 @@ def src_mul_edge(src, edge, out):
Notes
-----
This function is deprecated. Please use :func:`~dgl.function.u_mul_e` instead.
Parameters
----------
@@ -254,7 +254,7 @@ def copy_src(src, out):
Notes
-----
This function is deprecated. Please use :func:`~dgl.function.copy_u` instead.
Parameters
----------
@@ -281,7 +281,7 @@ def copy_edge(edge, out):
Notes
-----
This function is deprecated. Please use :func:`~dgl.function.copy_e` instead.
Parameters
----------
......
@@ -41,10 +41,10 @@ class SAGEConv(nn.Block):
are required to be the same.
out_feats : int
Output feature size; i.e., the number of dimensions of :math:`h_i^{(l+1)}`.
aggregator_type : str
Aggregator type to use (``mean``, ``gcn``, ``pool``, ``lstm``).
feat_drop : float
Dropout rate on features, default: ``0``.
bias : bool
If True, adds a learnable bias to the output. Default: ``True``.
norm : callable activation function/layer or None, optional
......
@@ -50,10 +50,10 @@ class SAGEConv(nn.Module):
are required to be the same.
out_feats : int
Output feature size; i.e., the number of dimensions of :math:`h_i^{(l+1)}`.
aggregator_type : str
Aggregator type to use (``mean``, ``gcn``, ``pool``, ``lstm``).
feat_drop : float
Dropout rate on features, default: ``0``.
bias : bool
If True, adds a learnable bias to the output. Default: ``True``.
norm : callable activation function/layer or None, optional
@@ -199,7 +199,7 @@ class SAGEConv(nn.Module):
torch.Tensor
The output feature of shape :math:`(N_{dst}, D_{out})`
where :math:`N_{dst}` is the number of destination nodes in the input graph,
:math:`D_{out}` is the size of the output feature.
"""
self._compatibility_check()
with graph.local_scope():
......
@@ -40,10 +40,10 @@ class SAGEConv(layers.Layer):
are required to be the same.
out_feats : int
Output feature size; i.e., the number of dimensions of :math:`h_i^{(l+1)}`.
aggregator_type : str
Aggregator type to use (``mean``, ``gcn``, ``pool``, ``lstm``).
feat_drop : float
Dropout rate on features, default: ``0``.
bias : bool
If True, adds a learnable bias to the output. Default: ``True``.
norm : callable activation function/layer or None, optional
......