"examples/mxnet/git@developer.sourcefind.cn:OpenDAS/dgl.git" did not exist on "dec8b49b5df9428bda561f82780b2d73f4589ea9"
Unverified commit 740cd706, authored by Rhett Ying, committed by GitHub

[Doc] Fix doc typo in DistEmbedding (#4258)

* [Doc] fix docstring typo

* Update sparse_emb.py

* Update sparse_emb.py

* update link
parent 9a7ad16e
@@ -16,11 +16,11 @@ class DistEmbedding:
     To support efficient training on a graph with many nodes, the embeddings support sparse
     updates. That is, only the embeddings involved in a mini-batch computation are updated.
-    Currently, DGL provides only one optimizer: `SparseAdagrad`. DGL will provide more
-    optimizers in the future.
+    Please refer to `Distributed Optimizers <https://docs.dgl.ai/api/python/dgl.distributed.html#
+    distributed-embedding-optimizer>`__ for available optimizers in DGL.
     Distributed embeddings are sharded and stored in a cluster of machines in the same way as
-    py:meth:`dgl.distributed.DistTensor`, except that distributed embeddings are trainable.
+    :class:`dgl.distributed.DistTensor`, except that distributed embeddings are trainable.
     Because distributed embeddings are sharded
     in the same way as nodes and edges of a distributed graph, it is usually much more
     efficient to access than the sparse embeddings provided by the deep learning frameworks.
...
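
For context on the docstring above, the following is a minimal sketch of how a DistEmbedding is typically created and trained with a distributed sparse optimizer such as dgl.distributed.optim.SparseAdagrad (one of the optimizers covered by the linked "Distributed Optimizers" section). It assumes dgl.distributed.initialize() has already been called and that `g` is a DistGraph; the `dataloader` and the loss are placeholders for illustration, not part of the patch.

import torch as th
import dgl

def initializer(shape, dtype):
    # Uniform initialization in [-1, 1]; any callable with this signature works.
    arr = th.zeros(shape, dtype=dtype)
    arr.uniform_(-1, 1)
    return arr

# `g` is assumed to be an already-constructed dgl.distributed.DistGraph.
# One 10-dimensional trainable row per node, sharded across the machines
# in the same way as a DistTensor.
emb = dgl.distributed.DistEmbedding(g.num_nodes(), 10, init_func=initializer)

# Distributed sparse optimizer: only the embedding rows that appear in a
# mini-batch are updated.
optimizer = dgl.distributed.optim.SparseAdagrad([emb], lr=0.001)

for nids in dataloader:          # `dataloader` yields batches of node IDs (placeholder)
    feats = emb(nids)            # pull only the rows needed for this mini-batch
    loss = (feats + 1).sum()     # placeholder loss
    loss.backward()
    optimizer.step()             # push the sparse gradient updates back to the servers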