Unverified Commit 65866989 authored by Tianjun Xiao, committed by GitHub

[NN] [Doc] Fix nn api doc (#2047)

* fix mxnet nn doc and no note on chebconv

* change notes to note to highlight
parent dacc7afe
@@ -47,8 +47,8 @@ class SGConv(nn.Module):
         0-in-degree nodes in input graph. By setting ``True``, it will suppress the check
         and let the users handle it by themselves. Default: ``False``.
 
-    Notes
-    -----
+    Note
+    ----
     Zero in-degree nodes will lead to invalid output value. This is because no message
     will be passed to those nodes, so the aggregation function will be applied on empty input.
     A common practice to avoid this is to add a self-loop for each node in the graph if
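The self-loop workaround this note describes looks roughly like the following; a minimal sketch (the toy graph and feature sizes are illustrative, not taken from the patch):

```python
import dgl
import torch
from dgl.nn import SGConv

# Toy graph where node 0 has no incoming edges (0-in-degree).
g = dgl.graph(([0, 1], [1, 2]), num_nodes=3)

# The common practice from the note: give every node a self-loop
# so that no node aggregates over an empty message set.
g = dgl.add_self_loop(g)

conv = SGConv(in_feats=10, out_feats=2, k=2)
feat = torch.ones(3, 10)
res = conv(g, feat)  # well-defined output for all three nodes
```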
@@ -107,8 +107,8 @@ class SGConv(nn.Module):
         -----------
         Reinitialize learnable parameters.
 
-        Notes
-        -----
+        Note
+        ----
         The model parameters are initialized using Xavier initialization
         and the bias is initialized to zero.
         """
@@ -158,8 +158,8 @@ class SGConv(nn.Module):
         since no message will be passed to those nodes. This will cause invalid output.
         The error can be ignored by setting the ``allow_zero_in_degree`` parameter to ``True``.
 
-        Notes
-        -----
+        Note
+        ----
         If ``cache`` is set to True, ``feat`` and ``graph`` should not change during
         training, or you will get wrong results.
         """
@@ -82,8 +82,8 @@ class TAGConv(nn.Module):
         -----------
         Reinitialize learnable parameters.
 
-        Notes
-        -----
+        Note
+        ----
         The model parameters are initialized using Glorot uniform initialization.
         """
         gain = nn.init.calculate_gain('relu')
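For reference, combining that gain with Glorot uniform initialization typically looks like the sketch below; ``self.lin`` is an assumed attribute name:

```python
import torch.nn as nn

def reset_parameters(self):
    # Glorot (Xavier) uniform initialization scaled by the ReLU gain,
    # matching the note above. ``self.lin`` is a hypothetical name.
    gain = nn.init.calculate_gain('relu')
    nn.init.xavier_uniform_(self.lin.weight, gain=gain)
    if self.lin.bias is not None:
        nn.init.zeros_(self.lin.bias)
```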
@@ -44,10 +44,6 @@ class ChebConv(layers.Layer):
     bias : bool, optional
         If True, adds a learnable bias to the output. Default: ``True``.
 
-    Note
-    ----
-    ChebConv only support DGLGraph as input for now. Heterograph will report error. To be fixed.
-
     Example
     -------
     >>> import dgl
@@ -29,30 +29,6 @@ class DenseChebConv(layers.Layer):
     bias : bool, optional
         If True, adds a learnable bias to the output. Default: ``True``.
 
-    Example
-    -------
-    >>> import dgl
-    >>> import numpy as np
-    >>> import tensorflow as tf
-    >>> from dgl.nn import DenseChebConv
-    >>>
-    >>> feat = tf.ones(6, 10)
-    >>> adj = tf.tensor([[0., 0., 1., 0., 0., 0.],
-    ...                  [1., 0., 0., 0., 0., 0.],
-    ...                  [0., 1., 0., 0., 0., 0.],
-    ...                  [0., 0., 1., 0., 0., 1.],
-    ...                  [0., 0., 0., 1., 0., 0.],
-    ...                  [0., 0., 0., 0., 0., 0.]])
-    >>> conv = DenseChebConv(10, 2, 2)
-    >>> res = conv(adj, feat)
-    >>> res
-    tensor([[-3.3516, -2.4797],
-            [-3.3516, -2.4797],
-            [-3.3516, -2.4797],
-            [-4.5192, -3.0835],
-            [-2.5259, -2.0527],
-            [-0.5327, -1.0219]])
-
     See also
     --------
     `ChebConv <https://docs.dgl.ai/api/python/nn.tensorflow.html#chebconv>`__
@@ -63,8 +63,8 @@ class GATConv(layers.Layer):
         0-in-degree nodes in input graph. By setting ``True``, it will suppress the check
         and let the users handle it by themselves. Default: ``False``.
 
-    Notes
-    -----
+    Note
+    ----
     Zero in-degree nodes will lead to invalid output value. This is because no message
     will be passed to those nodes, so the aggregation function will be applied on empty input.
     A common practice to avoid this is to add a self-loop for each node in the graph if
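For users who prefer to handle such nodes themselves, the check being suppressed can be reproduced up front; a minimal sketch (toy graph assumed):

```python
import dgl

g = dgl.graph(([0, 1], [1, 2]), num_nodes=3)

# Reproduce the check that ``allow_zero_in_degree=True`` would suppress.
if bool((g.in_degrees() == 0).any()):
    # Either add self-loops (the common practice above) or construct the
    # conv with allow_zero_in_degree=True and handle these nodes yourself.
    g = dgl.add_self_loop(g)
```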
@@ -60,8 +60,8 @@ class GraphConv(layers.Layer):
     bias : torch.Tensor
         The learnable bias tensor.
 
-    Notes
-    -----
+    Note
+    ----
     Zero in-degree nodes will lead to invalid output value. This is because no message
     will be passed to those nodes, so the aggregation function will be applied on empty input.
     A common practice to avoid this is to add a self-loop for each node in the graph if
@@ -208,8 +208,8 @@ class GraphConv(layers.Layer):
         since no message will be passed to those nodes. This will cause invalid output.
         The error can be ignored by setting the ``allow_zero_in_degree`` parameter to ``True``.
 
-        Notes
-        -----
+        Note
+        ----
         * Input shape: :math:`(N, *, \text{in_feats})` where * means any number of additional
           dimensions, :math:`N` is the number of nodes.
         * Output shape: :math:`(N, *, \text{out_feats})` where all but the last dimension are
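A quick shape check illustrating that note (sketched with the PyTorch module; the TensorFlow layer documents the same behavior):

```python
import dgl
import torch
from dgl.nn import GraphConv

g = dgl.add_self_loop(dgl.graph(([0, 1], [1, 2]), num_nodes=3))
conv = GraphConv(in_feats=10, out_feats=2)

# Extra middle dimensions pass through; only the last one changes.
feat = torch.ones(3, 4, 10)  # (N, *, in_feats) with N = 3 nodes
out = conv(g, feat)
print(out.shape)             # torch.Size([3, 4, 2]) == (N, *, out_feats)
```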
@@ -49,8 +49,8 @@ class SGConv(layers.Layer):
         0-in-degree nodes in input graph. By setting ``True``, it will suppress the check
         and let the users handle it by themselves. Default: ``False``.
 
-    Notes
-    -----
+    Note
+    ----
     Zero in-degree nodes will lead to invalid output value. This is because no message
     will be passed to those nodes, so the aggregation function will be applied on empty input.
     A common practice to avoid this is to add a self-loop for each node in the graph if
@@ -145,8 +145,8 @@ class SGConv(layers.Layer):
         since no message will be passed to those nodes. This will cause invalid output.
         The error can be ignored by setting the ``allow_zero_in_degree`` parameter to ``True``.
 
-        Notes
-        -----
+        Note
+        ----
         If ``cache`` is set to True, ``feat`` and ``graph`` should not change during
         training, or you will get wrong results.
         """