"torchvision/csrc/cpu/decoder/util_test.cpp" did not exist on "8b9859d3aeebcd37e6a284fc751c58569857f7be"
Unverified Commit 8f5baa15 authored by Jeremy Goh, committed by GitHub

[Doc] Fix spelling, references and update info on building docs (#3682)

* Fix ref to message-passing guide

* Fix pygments and spacing

* Update build documentation steps in README.md

* Use links

* Adjust parameters in SAGEConv docstring to match the order in __init__

* Fix spelling error

* Change doc link
parent dc629fc5
@@ -3,6 +3,7 @@ DGL document and tutorial folder
 Requirements
 ------------
+You need to build DGL locally first (as described [here](https://docs.dgl.ai/install/index.html#install-from-source)), and ensure the following python packages are installed:
 * sphinx
 * sphinx-gallery
 * sphinx_rtd_theme
......
@@ -6,7 +6,7 @@ dgl.function
 ==================================
 This subpackage hosts all the **built-in functions** provided by DGL. Built-in functions
-are DGL's recommended way to express different types of ref:`guide-message-passing` computation
+are DGL's recommended way to express different types of :ref:`guide-message-passing` computation
 (i.e., via :func:`~dgl.DGLGraph.update_all`) or computing edge-wise features from
 node-wise features (i.e., via :func:`~dgl.DGLGraph.apply_edges`). Built-in functions
 describe the node-wise and edge-wise computation in a symbolic way without any
@@ -55,7 +55,7 @@ following user-defined function:
     def udf_max(nodes):
         return {'h_max' : th.max(nodes.mailbox['m'], 1)[0]}
-All binary message function supports **broadcasting**, a mechansim for extending element-wise
+All binary message functions support **broadcasting**, a mechanism for extending element-wise
 operations to tensor inputs with different shapes. DGL generally follows the standard
 broadcasting semantic by `NumPy <https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html>`_
 and `PyTorch <https://pytorch.org/docs/stable/notes/broadcasting.html>`_. Below are some
......
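For context, here is a minimal sketch of using a built-in binary message function with broadcasting, the behavior the hunk above documents. The graph, feature names, and shapes are invented for illustration:

```python
import dgl
import dgl.function as fn
import torch as th

# Toy graph: 3 nodes, 2 edges.
g = dgl.graph(([0, 1], [1, 2]))
g.ndata['h'] = th.randn(3, 4)   # node features of shape (3, 4)
g.edata['w'] = th.randn(2, 1)   # per-edge scalars of shape (2, 1)

# Built-in message/reduce pair: multiply each source feature by the
# edge weight (the (4,) node feature broadcasts against the (1,) edge
# feature), then sum incoming messages at each destination node.
g.update_all(fn.u_mul_e('h', 'w', 'm'), fn.sum('m', 'h_new'))
print(g.ndata['h_new'].shape)   # torch.Size([3, 4])
```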
@@ -59,5 +59,5 @@ respectively:
 The above two implementations are mathematically equivalent. The latter
 one is more efficient because it does not need to save feat_src and
 feat_dst on edges, which is not memory-efficient. Plus, addition could
-be optimized with DGL’s built-in function ``u_add_v``, which further
+be optimized with DGL’s built-in function :func:`~dgl.function.u_add_v`, which further
 speeds up computation and saves memory footprint.
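To make the trade-off in that passage concrete, a small sketch (graph and field names invented) contrasting the per-edge UDF with the fused built-in:

```python
import dgl
import dgl.function as fn
import torch as th

g = dgl.graph(([0, 1, 2], [1, 2, 0]))
g.ndata['feat'] = th.randn(3, 8)

# UDF variant: materializes both endpoint features for every edge.
g.apply_edges(lambda edges: {'e': edges.src['feat'] + edges.dst['feat']})

# Built-in variant: a fused kernel, no per-edge copies of the inputs.
g.apply_edges(fn.u_add_v('feat', 'feat', 'e2'))

# Mathematically equivalent results, as the text states.
assert th.allclose(g.edata['e'], g.edata['e2'])
```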
@@ -10,7 +10,7 @@ case, you would like to have an *edge classification/regression* model.
 Here we generate a random graph for edge prediction as a demonstration.
-.. code:: ipython3
+.. code:: python
     src = np.random.randint(0, 100, 500)
     dst = np.random.randint(0, 100, 500)
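For completeness, a sketch of how this snippet typically continues: building the graph from the endpoint arrays and attaching synthetic data. The feature size and field names are illustrative assumptions, not part of the diff:

```python
import numpy as np
import torch as th
import dgl

src = np.random.randint(0, 100, 500)
dst = np.random.randint(0, 100, 500)
# Make the graph bidirectional so messages flow along both directions.
g = dgl.graph((np.concatenate([src, dst]), np.concatenate([dst, src])))
# Synthetic node features and per-edge targets for the demo.
g.ndata['feature'] = th.randn(g.num_nodes(), 10)
g.edata['label'] = th.randn(g.num_edges())
```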
@@ -328,5 +328,3 @@ file <https://github.com/dmlc/dgl/tree/master/examples/pytorch/gcmc>`__
 is called ``GCMCLayer``. The edge type predictor module is called
 ``BiDecoder``. Both of them are more complicated than the setting
 described here.
@@ -9,7 +9,7 @@
 The following code generates a random graph to demonstrate edge classification/regression.
-.. code:: ipython3
+.. code:: python
     src = np.random.randint(0, 100, 500)
     dst = np.random.randint(0, 100, 500)
......
@@ -9,7 +9,7 @@
 First, we create a random graph for edge prediction to use as an example.
-.. code:: ipython3
+.. code:: python
     src = np.random.randint(0, 100, 500)
     dst = np.random.randint(0, 100, 500)
......
@@ -229,7 +229,7 @@ def src_mul_edge(src, edge, out):
     Notes
     -----
-    This function is deprecated. Please use u_mul_e instead.
+    This function is deprecated. Please use :func:`~dgl.function.u_mul_e` instead.
     Parameters
     ----------
@@ -254,7 +254,7 @@ def copy_src(src, out):
     Notes
     -----
-    This function is deprecated. Please use copy_u instead.
+    This function is deprecated. Please use :func:`~dgl.function.copy_u` instead.
     Parameters
     ----------
@@ -281,7 +281,7 @@ def copy_edge(edge, out):
     Notes
     -----
-    This function is deprecated. Please use copy_e instead.
+    This function is deprecated. Please use :func:`~dgl.function.copy_e` instead.
     Parameters
     ----------
......
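As a quick reference, the deprecated names map one-to-one onto the current built-ins; a sketch (the field names are arbitrary placeholders):

```python
import dgl.function as fn

# Deprecated spellings            ->  current equivalents
# fn.src_mul_edge('h', 'w', 'm')  ->  fn.u_mul_e('h', 'w', 'm')
# fn.copy_src('h', 'm')           ->  fn.copy_u('h', 'm')
# fn.copy_edge('w', 'm')          ->  fn.copy_e('w', 'm')
msg_mul = fn.u_mul_e('h', 'w', 'm')   # source feature times edge feature
msg_cpu = fn.copy_u('h', 'm')         # copy source-node feature into message
msg_cpe = fn.copy_e('w', 'm')         # copy edge feature into message
```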
@@ -41,10 +41,10 @@ class SAGEConv(nn.Block):
         are required to be the same.
     out_feats : int
         Output feature size; i.e, the number of dimensions of :math:`h_i^{(l+1)}`.
-    feat_drop : float
-        Dropout rate on features, default: ``0``.
     aggregator_type : str
         Aggregator type to use (``mean``, ``gcn``, ``pool``, ``lstm``).
+    feat_drop : float
+        Dropout rate on features, default: ``0``.
     bias : bool
         If True, adds a learnable bias to the output. Default: ``True``.
     norm : callable activation function/layer or None, optional
......
@@ -50,10 +50,10 @@ class SAGEConv(nn.Module):
         are required to be the same.
     out_feats : int
         Output feature size; i.e, the number of dimensions of :math:`h_i^{(l+1)}`.
-    feat_drop : float
-        Dropout rate on features, default: ``0``.
     aggregator_type : str
         Aggregator type to use (``mean``, ``gcn``, ``pool``, ``lstm``).
+    feat_drop : float
+        Dropout rate on features, default: ``0``.
     bias : bool
         If True, adds a learnable bias to the output. Default: ``True``.
     norm : callable activation function/layer or None, optional
@@ -199,7 +199,7 @@ class SAGEConv(nn.Module):
         torch.Tensor
             The output feature of shape :math:`(N_{dst}, D_{out})`
             where :math:`N_{dst}` is the number of destination nodes in the input graph,
-            math:`D_{out}` is size of output feature.
+            :math:`D_{out}` is the size of the output feature.
         """
         self._compatibility_check()
         with graph.local_scope():
......
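A short usage sketch of the PyTorch SAGEConv, passing arguments in the same order the reordered docstring now documents; the sizes and aggregator choice are arbitrary:

```python
import dgl
import torch as th
from dgl.nn import SAGEConv

# Positional order matches the docstring after this change:
# in_feats, out_feats, aggregator_type, then feat_drop and bias.
conv = SAGEConv(10, 16, 'mean', feat_drop=0.1, bias=True)

g = dgl.graph(([0, 1, 2], [1, 2, 0]))
feat = th.randn(g.num_nodes(), 10)
out = conv(g, feat)   # shape (N_dst, D_out) == (3, 16)
```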
@@ -40,10 +40,10 @@ class SAGEConv(layers.Layer):
         are required to be the same.
     out_feats : int
         Output feature size; i.e, the number of dimensions of :math:`h_i^{(l+1)}`.
-    feat_drop : float
-        Dropout rate on features, default: ``0``.
     aggregator_type : str
         Aggregator type to use (``mean``, ``gcn``, ``pool``, ``lstm``).
+    feat_drop : float
+        Dropout rate on features, default: ``0``.
     bias : bool
         If True, adds a learnable bias to the output. Default: ``True``.
     norm : callable activation function/layer or None, optional
......