.. _guide-minibatch-gpu-sampling:

6.7 Using GPU for Neighborhood Sampling
---------------------------------------

Since 0.7, DGL has supported GPU-based neighborhood sampling, which has a significant
speed advantage over CPU-based neighborhood sampling.  If you estimate that your graph
can fit onto GPU memory and your model does not take much GPU memory, it is best to
put the graph onto GPU memory and use GPU-based neighborhood sampling.

For example, `OGB Products <https://ogb.stanford.edu/docs/nodeprop/#ogbn-products>`_ has
2.4M nodes and 61M edges.  The graph takes less than 1GB of memory, since the memory
consumption of a graph depends mostly on the number of edges.  It is therefore entirely
possible to fit the whole graph onto GPU.
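
As a quick illustration, the following is a minimal sketch of loading ``ogbn-products``
and moving the graph structure to GPU; the loading code uses the OGB package and is an
assumption here, not part of DGL itself.

.. code:: python

   from ogb.nodeproppred import DglNodePropPredDataset

   # Download/load ogbn-products as a DGLGraph plus a label tensor.
   dataset = DglNodePropPredDataset('ogbn-products')
   g, node_labels = dataset[0]
   # The structure takes well under 1GB, so it fits comfortably on GPU.
   g = g.to('cuda:0')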

Put the node features onto GPU memory
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If the node features can also fit onto GPU memory, it is recommended to put them there
to reduce the time for data transfer from CPU to GPU, which usually becomes a bottleneck
when using GPU for sampling.  For example, in the OGB Products dataset above, each node
has 100-dimensional features that take less than 1GB of memory in total.  These features
can be transferred to GPU before training with the following code.

.. code:: python

   # pop the features and labels
   features = g.ndata.pop('features')
   labels = g.ndata.pop('labels')
   # put them onto GPU
   features = features.to('cuda:0')
   labels = labels.to('cuda:0')

If the node features are too large to fit onto GPU memory, :class:`~dgl.contrib.UnifiedTensor`
enables GPU zero-copy access to the features stored on CPU memory and greatly reduces
the time for data transfer from CPU to GPU.
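
A minimal sketch of how this might look follows; the exact ``UnifiedTensor``
constructor arguments are an assumption here, so check its API reference before use.

.. code:: python

   # Keep the features in CPU memory but expose them to the GPU;
   # the constructor signature below is assumed, not verified.
   features = dgl.contrib.UnifiedTensor(g.ndata.pop('features'), device='cuda:0')

   # Indexing with a GPU tensor of node IDs reads the CPU memory in a
   # zero-copy fashion instead of doing a bulk host-to-device copy.
   batch_feats = features[input_nodes.to('cuda:0')]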


Using GPU-based neighborhood sampling in DGL data loaders
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

One can use GPU-based neighborhood sampling with DGL data loaders via:

* Put the graph onto GPU.

* Set ``device`` argument to a GPU device.

* Set ``num_workers`` argument to 0, because CUDA does not allow multiple processes to
  access the same context.

All the other arguments for the :class:`~dgl.dataloading.pytorch.NodeDataLoader` can be
the same as in the other user guides and tutorials.

.. code:: python

   g = g.to('cuda:0')
   dataloader = dgl.dataloading.NodeDataLoader(
       g,                                # The graph must be on GPU.
       train_nid,
       sampler,
       device=torch.device('cuda:0'),    # The device argument must be GPU.
       num_workers=0,                    # Number of workers must be 0.
       batch_size=1000,
       drop_last=False,
       shuffle=True)
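
With the graph, the features, and the sampled blocks all on GPU, the training loop
involves no host-to-device copies.  A sketch follows; ``model`` and ``opt`` are
assumptions standing in for your network and optimizer.

.. code:: python

   import torch.nn.functional as F

   for input_nodes, output_nodes, blocks in dataloader:
       # Everything below already lives on GPU.
       batch_feats = features[input_nodes]
       batch_labels = labels[output_nodes]
       batch_pred = model(blocks, batch_feats)
       loss = F.cross_entropy(batch_pred, batch_labels)
       opt.zero_grad()
       loss.backward()
       opt.step()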

GPU-based neighbor sampling also works for :class:`~dgl.dataloading.pytorch.EdgeDataLoader` since DGL 0.8.
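
The setup mirrors the node data loader above; here is a sketch where ``train_eid`` is
assumed to be a tensor of training edge IDs.

.. code:: python

   dataloader = dgl.dataloading.EdgeDataLoader(
       g,                                # The graph must be on GPU.
       train_eid,
       sampler,
       device=torch.device('cuda:0'),    # The device argument must be GPU.
       num_workers=0,                    # Number of workers must be 0.
       batch_size=1000,
       drop_last=False,
       shuffle=True)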

.. note::

  GPU-based neighbor sampling also works for custom neighborhood samplers as long as
  (1) your sampler is subclassed from :class:`~dgl.dataloading.BlockSampler`, and (2)
  your sampler works entirely on GPU.
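
As an illustration, below is a hedged sketch of such a sampler; it assumes the
``sample_frontier`` hook of :class:`~dgl.dataloading.BlockSampler` and uses only
ops that run on GPU.

.. code:: python

   class FullNeighborSampler(dgl.dataloading.BlockSampler):
       """Take all in-neighbors at every layer (illustrative only)."""
       def __init__(self, num_layers):
           super().__init__(num_layers)

       def sample_frontier(self, block_id, g, seed_nodes):
           # dgl.in_subgraph supports GPU graphs, so the whole
           # sampler stays on the device.
           return dgl.in_subgraph(g, seed_nodes)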


Using CUDA UVA-based neighborhood sampling in DGL data loaders
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. note::
   New feature introduced in DGL 0.8.

For the case where the graph is too large to fit onto GPU memory, we introduce CUDA
UVA (Unified Virtual Addressing)-based sampling, in which the GPU performs the sampling
on a graph pinned in CPU memory via zero-copy access.
You can enable UVA-based neighborhood sampling in DGL data loaders via:

* Pin the graph to page-locked memory via :func:`dgl.DGLGraph.pin_memory_`.

* Set ``device`` argument to a GPU device.

* Set ``num_workers`` argument to 0, because CUDA does not allow multiple processes to
  access the same context.

All the other arguments for the :class:`~dgl.dataloading.pytorch.NodeDataLoader` can be
the same as in the other user guides and tutorials.
UVA-based neighbor sampling also works for :class:`~dgl.dataloading.pytorch.EdgeDataLoader`.

.. code:: python

   g = g.pin_memory_()
   dataloader = dgl.dataloading.NodeDataLoader(
       g,                                # The graph must be pinned.
       train_nid,
       sampler,
       device=torch.device('cuda:0'),    # The device argument must be GPU.
       num_workers=0,                    # Number of workers must be 0.
       batch_size=1000,
       drop_last=False,
       shuffle=True)

UVA-based sampling is the recommended solution for mini-batch training on large graphs,
especially for multi-GPU training.

.. note::

  To use UVA-based sampling in multi-GPU training, you should first materialize all the
  necessary sparse formats of the graph and copy them to shared memory explicitly before
  spawning the training processes. Then you should pin the shared graph in each training
  process. Refer to our `GraphSAGE example <https://github.com/dmlc/dgl/blob/master/examples/pytorch/graphsage/train_sampling_multi_gpu.py>`_ for more details.
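
A hedged sketch of this preparation is below; ``create_formats_`` materializes the
sparse formats, while the ``shared_memory`` call and its name argument are
illustrative and should be checked against the linked example.

.. code:: python

   # In the main process, before spawning the training processes:
   g.create_formats_()                 # materialize all needed sparse formats
   g = g.shared_memory('train_graph')  # copy the structure to shared memory

   # In each spawned training process:
   g.pin_memory_()                     # pin the shared graph per process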


Using GPU-based neighbor sampling with DGL functions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can build your own GPU sampling pipelines with the following functions that support
operating on GPU (a sketch of a hand-rolled pipeline follows the lists below):

* :func:`dgl.sampling.sample_neighbors`

  * Only supports uniform sampling; non-uniform sampling can only run on CPU.

Subgraph extraction ops:

* :func:`dgl.node_subgraph`
* :func:`dgl.edge_subgraph`
* :func:`dgl.in_subgraph`
* :func:`dgl.out_subgraph`

Graph transform ops for subgraph construction:

* :func:`dgl.to_block`
* :func:`dgl.compact_graphs`
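
As referenced above, here is a minimal sketch of one sampling step built from these
functions; it assumes ``g`` is on GPU and ``seeds`` is a GPU tensor of seed node IDs.

.. code:: python

   # One layer of hand-rolled neighborhood sampling, entirely on GPU.
   frontier = dgl.sampling.sample_neighbors(g, seeds, 10)  # uniform, fanout 10
   block = dgl.to_block(frontier, seeds)                   # compact into a block
   seeds = block.srcdata[dgl.NID]                          # seeds for the next layer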