Commit e16e895d (unverified), authored by Minjie Wang, committed by GitHub

[Doc] fix some document warnings (#645)

* fix doc

* fix some format and warnings

* fix
parent 463807c5
@@ -7,7 +7,7 @@ Graph Store -- Graph for multi-processing and distributed training
 .. autoclass:: SharedMemoryDGLGraph

 Querying the distributed setting
-------------------------
+--------------------------------
 .. autosummary::
     :toctree: ../../generated/
@@ -26,7 +26,7 @@ Using Node/edge features
 SharedMemoryDGLGraph.init_edata

 Computing with Graph store
------------------------
+--------------------------
 .. autosummary::
     :toctree: ../../generated/
......
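The underline fixes in this hunk silence Sphinx's "Title underline too short" warning: in reStructuredText, a section underline must be at least as long as the title text above it. A minimal illustration of the convention (the heading text is taken from the hunk above):

```rst
.. Too short: Sphinx warns "Title underline too short."

Computing with Graph store
-----------------------

.. Correct: the underline covers the full title.

Computing with Graph store
--------------------------
```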
 .. _apinodeflow:

 NodeFlow -- Graph sampled from a large graph
-=========================================
+============================================
 .. currentmodule:: dgl
 .. autoclass:: NodeFlow
......
@@ -3323,25 +3323,27 @@ class DGLGraph(DGLBaseGraph):
     # pylint: disable=invalid-name
     def to(self, ctx):
-        """
-        Move both ndata and edata to the targeted mode (cpu/gpu)
+        """Move both ndata and edata to the targeted mode (cpu/gpu)

         Framework agnostic

         Parameters
         ----------
-        ctx : framework specific context object
+        ctx : framework-specific context object
+            The context to move data to.

-        Examples (Pytorch & MXNet)
+        Examples
         --------
-        >>> import backend as F
+        The following example uses PyTorch backend.
+
+        >>> import torch
         >>> G = dgl.DGLGraph()
         >>> G.add_nodes(5, {'h': torch.ones((5, 2))})
         >>> G.add_edges([0, 1], [1, 2], {'m' : torch.ones((2, 2))})
         >>> G.add_edges([0, 1], [1, 2], {'m' : torch.ones((2, 2))})
-        >>> G.to(F.cuda())
+        >>> G.to(torch.device('cuda:0'))
         """
         for k in self.ndata.keys():
             self.ndata[k] = F.copy_to(self.ndata[k], ctx)
         for k in self.edata.keys():
             self.edata[k] = F.copy_to(self.edata[k], ctx)
     # pylint: enable=invalid-name
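The body of `to` above simply applies the backend's `F.copy_to` to every `ndata`/`edata` field, which is what makes it framework agnostic. That pattern can be sketched in plain Python; the `MiniGraph` class, the `copy_to` stub, and the dict-based "tensors" below are illustrative stand-ins, not DGL's actual implementation:

```python
def copy_to(data, ctx):
    """Stand-in for a backend's copy_to: returns the data tagged with its new context."""
    return {"values": data["values"], "ctx": ctx}

class MiniGraph:
    """Toy graph holding per-node and per-edge feature dicts, like ndata/edata."""
    def __init__(self):
        self.ndata = {}
        self.edata = {}

    def to(self, ctx):
        # Same loop structure as DGLGraph.to: move every feature field to ctx.
        for k in self.ndata:
            self.ndata[k] = copy_to(self.ndata[k], ctx)
        for k in self.edata:
            self.edata[k] = copy_to(self.edata[k], ctx)
        return self

g = MiniGraph()
g.ndata["h"] = {"values": [1.0] * 5, "ctx": "cpu"}
g.edata["m"] = {"values": [1.0] * 2, "ctx": "cpu"}
g.to("cuda:0")
print(g.ndata["h"]["ctx"])  # cuda:0
```

The design point the docstring change reflects: the context object (`torch.device('cuda:0')` for PyTorch) is whatever the active backend understands, and the graph itself never inspects it.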
@@ -558,12 +558,17 @@ class NodeFlow(DGLBaseGraph):
         or not.

         There are two types of an incidence matrix `I`:
-        * "in":
-          - I[v, e] = 1 if e is the in-edge of v (or v is the dst node of e);
-          - I[v, e] = 0 otherwise.
-        * "out":
-          - I[v, e] = 1 if e is the out-edge of v (or v is the src node of e);
-          - I[v, e] = 0 otherwise.
+
+        * ``in``:
+
+          - I[v, e] = 1 if e is the in-edge of v (or v is the dst node of e);
+          - I[v, e] = 0 otherwise.
+
+        * ``out``:
+
+          - I[v, e] = 1 if e is the out-edge of v (or v is the src node of e);
+          - I[v, e] = 0 otherwise.

         "both" isn't defined in the block of a NodeFlow.

         Parameters
......
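The `in`/`out` definitions in that docstring can be checked with a small hand-rolled sketch. Plain nested lists stand in for the sparse matrices DGL actually returns, and the helper name below is made up for illustration:

```python
def incidence_matrix(num_nodes, edges, typestr):
    """Dense node-by-edge incidence matrix per the 'in'/'out' definitions above."""
    I = [[0] * len(edges) for _ in range(num_nodes)]
    for e, (src, dst) in enumerate(edges):
        if typestr == "in":
            I[dst][e] = 1   # e is an in-edge of its destination node
        elif typestr == "out":
            I[src][e] = 1   # e is an out-edge of its source node
        else:
            raise ValueError("'both' is not defined for a NodeFlow block")
    return I

edges = [(0, 1), (1, 2)]                  # edge 0: 0 -> 1, edge 1: 1 -> 2
print(incidence_matrix(3, edges, "in"))   # [[0, 0], [1, 0], [0, 1]]
print(incidence_matrix(3, edges, "out"))  # [[1, 0], [0, 1], [0, 0]]
```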
""" """
.. _sampling: .. _model-sampling:
NodeFlow and Sampling NodeFlow and Sampling
======================================= =======================================
......
""" """
.. _sampling: .. _model-graph-store:
Large-Scale Training of Graph Neural Networks Large-Scale Training of Graph Neural Networks
============================================= =============================================
......
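The label renames in the last two hunks address another Sphinx warning: both tutorial files declared the same `.. _sampling:` anchor, and duplicate labels make `:ref:` targets ambiguous. With unique labels, each tutorial can be cross-referenced unambiguously, e.g.:

```rst
.. _model-sampling:

NodeFlow and Sampling
=====================

.. From any other document, this label now resolves to exactly one target:

See the :ref:`NodeFlow tutorial <model-sampling>` for sampling-based training.
```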