.. _api-dataloading:

dgl.dataloading
=================================

.. automodule:: dgl.dataloading

DataLoaders
-----------
.. currentmodule:: dgl.dataloading.pytorch

DGL's DataLoaders for mini-batch training work similarly to PyTorch's ``DataLoader``.
They have a generator interface that returns mini-batches sampled from the given graphs.
DGL provides a ``NodeDataLoader`` for node classification tasks, an ``EdgeDataLoader``
for edge classification and link prediction tasks, and a ``GraphDataLoader`` for
graph-level tasks such as graph classification.

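For illustration, here is a minimal sketch of node-wise mini-batch iteration
(the graph, seed node IDs, and batch size below are placeholders, not part of
the API):

.. code:: python

    import torch
    import dgl
    from dgl.dataloading import NodeDataLoader, MultiLayerFullNeighborSampler

    g = dgl.rand_graph(1000, 5000)       # placeholder graph
    train_nids = torch.arange(600)       # placeholder seed node IDs

    # Take all neighbors for each of the two GNN layers.
    sampler = MultiLayerFullNeighborSampler(2)
    dataloader = NodeDataLoader(
        g, train_nids, sampler,
        batch_size=64, shuffle=True, drop_last=False)

    for input_nodes, output_nodes, blocks in dataloader:
        # ``blocks`` is a list of message flow graphs, one per layer;
        # pass them to the model together with the input node features.
        pass
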
.. autoclass:: NodeDataLoader
.. autoclass:: EdgeDataLoader
.. autoclass:: GraphDataLoader

.. _api-dataloading-neighbor-sampling:

Neighbor Sampler
-----------------------------
.. currentmodule:: dgl.dataloading.neighbor

Neighbor samplers are classes that control how a ``DataLoader`` samples neighbors.
They all inherit from the base :class:`BlockSampler` class and implement different
neighbor sampling strategies by overriding the ``sample_frontier`` or
``sample_blocks`` methods, as in the sketch below.

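As a rough sketch (assuming the ``sample_frontier(block_id, g, seed_nodes)``
signature of DGL 0.5/0.6, which may differ in other releases), a custom sampler
that draws a fixed number of in-neighbors per layer could look like this,
mirroring what :class:`MultiLayerNeighborSampler` does:

.. code:: python

    import dgl
    from dgl.dataloading import BlockSampler

    class FixedFanoutSampler(BlockSampler):
        """Sample a fixed number of in-neighbors for every layer."""

        def __init__(self, fanout, num_layers):
            super().__init__(num_layers)
            self.fanout = fanout

        def sample_frontier(self, block_id, g, seed_nodes):
            # Return a frontier graph containing the sampled in-edges of the
            # current seed nodes; the inherited ``sample_blocks`` turns each
            # frontier into a block.
            return dgl.sampling.sample_neighbors(g, seed_nodes, self.fanout)
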
.. autoclass:: BlockSampler
    :members: sample_frontier, sample_blocks

.. autoclass:: MultiLayerNeighborSampler
    :members: sample_frontier
    :show-inheritance:

.. autoclass:: MultiLayerFullNeighborSampler
    :show-inheritance:

.. _api-dataloading-negative-sampling:

Negative Samplers for Link Prediction
-------------------------------------
.. currentmodule:: dgl.dataloading.negative_sampler

Negative samplers are classes that control how the ``EdgeDataLoader`` generates
negative edges, as in the sketch below.

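For illustration, a minimal sketch of link-prediction mini-batching with
uniform negative sampling (the graph, edge IDs, and hyperparameters below are
placeholders):

.. code:: python

    import torch
    import dgl
    from dgl.dataloading import EdgeDataLoader, MultiLayerFullNeighborSampler
    from dgl.dataloading.negative_sampler import Uniform

    g = dgl.rand_graph(1000, 5000)              # placeholder graph
    train_eids = torch.arange(g.number_of_edges())

    sampler = MultiLayerFullNeighborSampler(2)
    dataloader = EdgeDataLoader(
        g, train_eids, sampler,
        negative_sampler=Uniform(5),            # 5 negatives per positive edge
        batch_size=64, shuffle=True)

    for input_nodes, pos_graph, neg_graph, blocks in dataloader:
        # ``pos_graph`` holds the sampled (existing) edges and ``neg_graph``
        # the corresponding negative edges over the same node set.
        pass
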
.. autoclass:: Uniform
    :members: __call__

Async Copying to/from GPUs
--------------------------
.. currentmodule:: dgl.dataloading

Data can be copied from the CPU to the GPU with the :class:`AsyncTransferer`
while the GPU is being used for computation.

For the transfer to be fully asynchronous, the context the
:class:`AsyncTransferer` is created with must be a GPU context, and the input
tensor must be in pinned memory.

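A rough usage sketch (assuming the transferer is constructed with the target
device, that ``async_copy`` takes the source tensor and the destination device,
and that ``wait`` returns the copied GPU tensor; see the class references below
for the exact signatures):

.. code:: python

    import torch
    from dgl.dataloading import AsyncTransferer

    # Create the transferer with a GPU context so copies can overlap with
    # computation on that device.
    transferer = AsyncTransferer(torch.device('cuda:0'))

    # The source tensor must live in pinned (page-locked) host memory.
    cpu_tensor = torch.randn(1000, 128).pin_memory()

    # Start the copy; a handle is returned immediately.
    transfer = transferer.async_copy(cpu_tensor, torch.device('cuda:0'))

    # ... run other GPU work here ...

    # Block until the copy has finished and retrieve the GPU tensor.
    gpu_tensor = transfer.wait()
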

.. autoclass:: AsyncTransferer
    :members: __init__, async_copy

.. autoclass:: async_transferer.Transfer
    :members: wait