OpenDAS / dgl · Commit a19e0efd (unverified)
Authored Oct 11, 2023 by Ramon Zhou; committed by GitHub on Oct 11, 2023
[GraphBolt] Modify the doc string of DistributedItemSampler (#6420)
parent 7d87dcec
Showing 1 changed file with 95 additions and 42 deletions.

python/dgl/graphbolt/item_sampler.py (+95, -42)
...
@@ -345,6 +345,10 @@ class DistributedItemSampler(ItemSampler):
counterparts. The original item set is sharded such that each replica
(process) receives an exclusive subset.
Note: DistributedItemSampler may not work as expected when it is the last
datapipe before the data is fetched. Please wrap it with a
SingleProcessDataLoader or another datapipe.
Note: The items are first sharded across the replicas, then shuffled (if
needed) and batched. Therefore, each replica always receives the same set
of items.
...
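To make the sharding note above concrete: the outputs in the examples below imply a round-robin split, where replica r takes every num_replicas-th item starting at index r. The following standalone sketch reproduces that split with plain tensor slicing; it is an illustration only, and the actual sharding strategy internal to DistributedItemSampler is an assumption here.

import torch

# Hypothetical illustration of the round-robin sharding the docstring
# examples imply; the real strategy is internal to DistributedItemSampler.
items = torch.arange(0, 14)
num_replicas = 4
for rank in range(num_replicas):
    shard = items[rank::num_replicas]  # replica `rank` takes every 4th item
    print(f"Replica#{rank}: {shard.tolist()}")
# Replica#0: [0, 4, 8, 12]
# Replica#1: [1, 5, 9, 13]
# Replica#2: [2, 6, 10]
# Replica#3: [3, 7, 11]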
@@ -387,48 +391,97 @@ class DistributedItemSampler(ItemSampler):
Examples
--------
1. num_replica = 4, batch_size = 2, shuffle = False, drop_last = False,
drop_uneven_inputs = False, item_set = [0, 1, 2, ..., 7, 8, 9]
- Replica#0 gets [[0, 4], [8]]
- Replica#1 gets [[1, 5], [9]]
- Replica#2 gets [[2, 6]]
- Replica#3 gets [[3, 7]]
2. num_replica = 4, batch_size = 2, shuffle = False, drop_last = True,
drop_uneven_inputs = False, item_set = [0, 1, 2, ..., 7, 8, 9].
- Replica#0 gets [[0, 4]]
- Replica#1 gets [[1, 5]]
- Replica#2 gets [[2, 6]]
- Replica#3 gets [[3, 7]]
3. num_replica = 4, batch_size = 2, shuffle = False, drop_last = True,
drop_uneven_inputs = False, item_set = [0, 1, 2, ..., 11, 12, 13].
- Replica#0 gets [[0, 4], [8, 12]]
- Replica#1 gets [[1, 5], [9, 13]]
- Replica#2 gets [[2, 6]]
- Replica#3 gets [[3, 7]]
4. num_replica = 4, batch_size = 2, shuffle = False, drop_last = False,
drop_uneven_inputs = True, item_set = [0, 1, 2, ..., 11, 12, 13].
- Replica#0 gets [[0, 4], [8, 12]]
- Replica#1 gets [[1, 5], [9, 13]]
- Replica#2 gets [[2, 6], [10]]
- Replica#3 gets [[3, 7], [11]]
5. num_replica = 4, batch_size = 2, shuffle = False, drop_last = True,
drop_uneven_inputs = True, item_set = [0, 1, 2, ..., 11, 12, 13].
- Replica#0 gets [[0, 4]]
- Replica#1 gets [[1, 5]]
- Replica#2 gets [[2, 6]]
- Replica#3 gets [[3, 7]]
6. num_replica = 4, batch_size = 2, shuffle = True, drop_last = True,
drop_uneven_inputs = False, item_set = [0, 1, 2, ..., 11, 12, 13].
One possible output:
- Replica#0 gets [[8, 0], [12, 4]]
- Replica#1 gets [[13, 1], [9, 5]]
- Replica#2 gets [[10, 2]]
- Replica#3 gets [[7, 11]]
0. Preparation: DistributedItemSampler needs a multi-processing environment
to work. You need to spawn subprocesses and initialize the process group
before executing the following examples; ``proc_id`` below is the rank of
each spawned process. Due to randomness, the actual output may differ from
what is listed below.
>>> import torch
>>> import torch.multiprocessing as mp
>>> from dgl import graphbolt as gb
>>> item_set = gb.ItemSet(torch.arange(0, 14))
>>> num_replicas = 4
>>> batch_size = 2
>>> mp.spawn(...)
1. shuffle = False, drop_last = False, drop_uneven_inputs = False.
>>> item_sampler = gb.DistributedItemSampler(
...     item_set, batch_size=2, shuffle=False, drop_last=False,
...     drop_uneven_inputs=False,
... )
>>> data_loader = gb.SingleProcessDataLoader(item_sampler)
>>> print(f"Replica#{proc_id}: {list(data_loader)}")
Replica#0: [tensor([0, 4]), tensor([ 8, 12])]
Replica#1: [tensor([1, 5]), tensor([ 9, 13])]
Replica#2: [tensor([2, 6]), tensor([10])]
Replica#3: [tensor([3, 7]), tensor([11])]
2. shuffle = False, drop_last = True, drop_uneven_inputs = False.
>>> item_sampler = gb.DistributedItemSampler(
...     item_set, batch_size=2, shuffle=False, drop_last=True,
...     drop_uneven_inputs=False,
... )
>>> data_loader = gb.SingleProcessDataLoader(item_sampler)
>>> print(f"Replica#{proc_id}: {list(data_loader)}")
Replica#0: [tensor([0, 4]), tensor([ 8, 12])]
Replica#1: [tensor([1, 5]), tensor([ 9, 13])]
Replica#2: [tensor([2, 6])]
Replica#3: [tensor([3, 7])]
3. shuffle = False, drop_last = False, drop_uneven_inputs = True.
>>> item_sampler = gb.DistributedItemSampler(
...     item_set, batch_size=2, shuffle=False, drop_last=False,
...     drop_uneven_inputs=True,
... )
>>> data_loader = gb.SingleProcessDataLoader(item_sampler)
>>> print(f"Replica#{proc_id}: {list(data_loader)}")
Replica#0: [tensor([0, 4]), tensor([ 8, 12])]
Replica#1: [tensor([1, 5]), tensor([ 9, 13])]
Replica#2: [tensor([2, 6]), tensor([10])]
Replica#3: [tensor([3, 7]), tensor([11])]
4. shuffle = False, drop_last = True, drop_uneven_inputs = True.
>>> item_sampler = gb.DistributedItemSampler(
...     item_set, batch_size=2, shuffle=False, drop_last=True,
...     drop_uneven_inputs=True,
... )
>>> data_loader = gb.SingleProcessDataLoader(item_sampler)
>>> print(f"Replica#{proc_id}: {list(data_loader)}")
Replica#0: [tensor([0, 4])]
Replica#1: [tensor([1, 5])]
Replica#2: [tensor([2, 6])]
Replica#3: [tensor([3, 7])]
5. shuffle = True, drop_last = True, drop_uneven_inputs = False.
>>> item_sampler = gb.DistributedItemSampler(
...     item_set, batch_size=2, shuffle=True, drop_last=True,
...     drop_uneven_inputs=False,
... )
>>> data_loader = gb.SingleProcessDataLoader(item_sampler)
>>> print(f"Replica#{proc_id}: {list(data_loader)}")
(One possible output:)
Replica#0: [tensor([0, 8]), tensor([ 4, 12])]
Replica#1: [tensor([ 5, 13]), tensor([9, 1])]
Replica#2: [tensor([ 2, 10])]
Replica#3: [tensor([11, 7])]
6. shuffle = True, drop_last = True, drop_uneven_inputs = True.
>>> item_sampler = gb.DistributedItemSampler(
...     item_set, batch_size=2, shuffle=True, drop_last=True,
...     drop_uneven_inputs=True,
... )
>>> data_loader = gb.SingleProcessDataLoader(item_sampler)
>>> print(f"Replica#{proc_id}: {list(data_loader)}")
(One possible output:)
Replica#0: [tensor([8, 0])]
Replica#1: [tensor([ 1, 13])]
Replica#2: [tensor([10, 6])]
Replica#3: [tensor([ 3, 11])]
"""
def __init__(
...
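For context, here is one way the preparation step's elided ``mp.spawn(...)`` call could be fleshed out. This is a minimal sketch under stated assumptions, not part of the commit: the worker function name ``run``, the ``gloo`` backend, and the rendezvous environment variables are all hypothetical choices, while the ``gb.*`` calls mirror the docstring examples above.

import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from dgl import graphbolt as gb


def run(proc_id, num_replicas):
    # Each spawned worker receives its rank as the first argument; the
    # docstring examples print this rank as ``Replica#{proc_id}``.
    os.environ["MASTER_ADDR"] = "127.0.0.1"  # hypothetical rendezvous address
    os.environ["MASTER_PORT"] = "29500"      # hypothetical rendezvous port
    dist.init_process_group(
        backend="gloo", rank=proc_id, world_size=num_replicas
    )

    # Items are sharded across replicas first, then (optionally) shuffled
    # and batched, so each replica always sees the same subset of items.
    item_set = gb.ItemSet(torch.arange(0, 14))
    item_sampler = gb.DistributedItemSampler(
        item_set, batch_size=2, shuffle=False, drop_last=False,
        drop_uneven_inputs=False,
    )
    # Per the note in the docstring, wrap the sampler with another datapipe
    # (here a SingleProcessDataLoader) rather than iterating it directly.
    data_loader = gb.SingleProcessDataLoader(item_sampler)
    print(f"Replica#{proc_id}: {list(data_loader)}")

    dist.destroy_process_group()


if __name__ == "__main__":
    num_replicas = 4
    mp.spawn(run, args=(num_replicas,), nprocs=num_replicas)

Run as a script, this would print one line per replica, matching example 1 in the docstring.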