OpenDAS / dgl · Commit 4375c2d7 (Unverified)

Authored Apr 16, 2022 by Quan (Andy) Gan; committed by GitHub, Apr 16, 2022
Parent: f5bba284

[Doc] Fix documentation in dgl.multiprocessing namespace (#3929)

* fix docs
* remove
* oh
* fix
Showing 3 changed files with 4 additions and 7 deletions (+4 −7)
docs/source/api/python/dgl.multiprocessing.rst  (+0 −1)
python/dgl/multiprocessing/__init__.py          (+1 −1)
python/dgl/multiprocessing/pytorch.py           (+3 −5)
docs/source/api/python/dgl.multiprocessing.rst

@@ -17,6 +17,5 @@ In addition, if your backend is PyTorch, this module will also be compatible wit
 .. autosummary::
     :toctree: ../../generated/
 
-    spawn
     call_once_and_share
     shared_tensor
python/dgl/multiprocessing/__init__.py

@@ -9,7 +9,7 @@ from .. import backend as F
 if F.get_preferred_backend() == 'pytorch':
     # Wrap around torch.multiprocessing...
     from torch.multiprocessing import *
-    # ... and override the Process initializer and spawn function.
+    # ... and override the Process initializer.
     from .pytorch import *
 else:
     # Just import multiprocessing module.
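As the comments above show, when the preferred backend is PyTorch the package re-exports torch.multiprocessing wholesale and then overrides the Process initializer, so it behaves as a drop-in replacement for the standard module. A minimal usage sketch, assuming a PyTorch-backend DGL install; the worker function and tensor here are invented for illustration:

import torch
import dgl.multiprocessing as mp  # torch.multiprocessing plus DGL's Process override

def worker(buf):
    # Runs in the child process. torch.multiprocessing hands the child a view
    # of the same shared-memory storage, so this write is visible to the parent.
    buf.fill_(1.0)

if __name__ == '__main__':
    buf = torch.zeros(4).share_memory_()
    p = mp.Process(target=worker, args=(buf,))  # the overridden Process class
    p.start()
    p.join()
    print(buf)  # tensor([1., 1., 1., 1.])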
python/dgl/multiprocessing/pytorch.py

@@ -44,10 +44,8 @@ def _get_shared_mem_name(id_):
     return "shared" + str(id_)
 
 def call_once_and_share(func, shape, dtype, rank=0):
-    """Invoke the function in a single process of the process group spawned by
-    :func:`spawn`, and share the result to other processes.
-
-    Requires the subprocesses to be spawned with :func:`dgl.multiprocessing.pytorch.spawn`.
+    """Invoke the function in a single process of the PyTorch distributed process group,
+    and share the result with other processes.
 
     Parameters
     ----------
@@ -89,7 +87,7 @@ def call_once_and_share(func, shape, dtype, rank=0):
 def shared_tensor(shape, dtype=torch.float32):
     """Create a tensor in shared memory accessible by all processes within the same
-    ``torch.distsributed`` process group.
+    ``torch.distributed`` process group.
 
     The content is uninitialized.
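The revised docstrings only assume an initialized ``torch.distributed`` process group rather than subprocesses created by :func:`spawn`. A hedged usage sketch of both helpers under that assumption, launching two CPU workers in a gloo group via torch.multiprocessing.spawn; the worker body, shapes, and address are invented for illustration:

import torch
import torch.distributed as dist
import torch.multiprocessing as tmp
from dgl.multiprocessing import call_once_and_share, shared_tensor

def init_features():
    # Stand-in for an expensive computation that should run only once.
    return torch.arange(8, dtype=torch.float32)

def worker(rank, world_size):
    dist.init_process_group('gloo', init_method='tcp://127.0.0.1:29500',
                            rank=rank, world_size=world_size)
    # init_features() runs only on rank 0; every rank receives the same
    # shared-memory tensor of the declared shape and dtype.
    feats = call_once_and_share(init_features, (8,), torch.float32, rank=0)
    # Uninitialized shared-memory tensor visible to all ranks.
    buf = shared_tensor((world_size,))
    buf[rank] = float(rank)
    dist.barrier()
    if rank == 0:
        print(feats, buf)
    dist.destroy_process_group()

if __name__ == '__main__':
    tmp.spawn(worker, args=(2,), nprocs=2)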