Unverified Commit 453d358d authored by xiang song (charlie.song), committed by GitHub

[Doc] Fix docs for sparse optimizer (#2680)


Co-authored-by: Ubuntu <ubuntu@ip-172-31-56-220.ec2.internal>
Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com>
parent 9e04a52a
@@ -268,6 +268,9 @@ SegmentedKNNGraph
:members:
:show-inheritance:
NodeEmbedding Module
----------------------------------------

NodeEmbedding
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
......
@@ -122,6 +122,7 @@ Getting Started
api/python/dgl.function
api/python/nn
api/python/dgl.ops
api/python/dgl.optim
api/python/dgl.sampling
api/python/udf
......
@@ -22,6 +22,8 @@ class NodeEmbedding: # NodeEmbedding
``torch.distributed.TCPStore`` to share metadata across multiple GPU processes.
It uses the local address '127.0.0.1:12346' to initialize the TCPStore.

NOTE: Support for NodeEmbedding is experimental.
Parameters
----------
num_embeddings : int
......
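As a quick orientation for the class this hunk documents, here is a minimal sketch of constructing a NodeEmbedding. The graph ``g``, the embedding size, and the ``init_func`` keyword are illustrative assumptions; only ``num_embeddings`` is visible in the diff:

>>> import torch as th
>>> import dgl
>>> g = dgl.rand_graph(100, 200)  # hypothetical graph, for illustration only
>>> def initializer(emb):
...     th.nn.init.xavier_uniform_(emb)  # fill the raw embedding tensor in place
...     return emb
>>> # one trainable 16-dim vector per node of g
>>> emb = dgl.nn.NodeEmbedding(g.num_nodes(), 16, 'node_emb', init_func=initializer)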
"""dgl optims for pytorch."""
"""dgl sparse optimizer for pytorch."""
from .sparse_optim import SparseAdagrad, SparseAdam
@@ -237,6 +237,8 @@ class SparseAdagrad(SparseGradOptimizer):
:math:`G_{t,i,j}=G_{t-1,i,j} + g_{t,i,j}^2` and :math:`g_{t,i,j}` is the gradient of
the dimension :math:`j` of embedding :math:`i` at step :math:`t`.

NOTE: Support for the sparse Adagrad optimizer is experimental.
Parameters
----------
params : list[dgl.nn.NodeEmbedding]
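A hedged end-to-end sketch of driving the sparse Adagrad update described above; the lookup call, minibatch, loss, and ``lr`` value are assumptions for illustration, not taken from the diff:

>>> emb = dgl.nn.NodeEmbedding(g.num_nodes(), 16, 'adagrad_emb')
>>> optimizer = dgl.optim.SparseAdagrad([emb], lr=0.01)
>>> node_ids = th.arange(10)   # a minibatch of node IDs
>>> feats = emb(node_ids)      # gather the touched rows; gradients flow to them
>>> loss = feats.sum()         # placeholder objective
>>> loss.backward()            # sparse gradients exist only for touched rows
>>> optimizer.step()           # applies G_t = G_{t-1} + g_t^2 per touched entry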
@@ -335,6 +337,8 @@ class SparseAdam(SparseGradOptimizer):
:math:`g_{t,i,j}` is the gradient of the dimension :math:`j` of embedding :math:`i`
at step :math:`t`.

NOTE: Support for the sparse Adam optimizer is experimental.
Parameters
----------
params : list[dgl.nn.NodeEmbedding]
@@ -348,8 +352,8 @@ class SparseAdam(SparseGradOptimizer):
The term added to the denominator to improve numerical stability
Default: 1e-8
-Examples:
+Examples
+--------
>>> def initializer(emb):
...     th.nn.init.xavier_uniform_(emb)
...     return emb
......
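The Examples block above is truncated by the diff; a hedged sketch of how such an example typically continues (the graph ``g``, minibatch ``node_ids``, and loss are illustrative assumptions, not part of the visible hunk):

>>> emb = dgl.nn.NodeEmbedding(g.num_nodes(), 10, 'emb', init_func=initializer)
>>> optimizer = dgl.optim.SparseAdam([emb], lr=0.001)
>>> feats = emb(node_ids)   # look up embeddings for a minibatch
>>> loss = feats.sum()      # placeholder objective
>>> loss.backward()
>>> optimizer.step()        # Adam moment estimates updated only for touched rows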