.. role:: hidden
    :class: hidden-section

encoding.parallel
=================

- PyTorch's built-in ``DataParallel`` does not support multi-GPU loss computation: outputs are gathered onto a single device and the loss is evaluated there, which makes GPU memory usage very imbalanced. We address this issue here by applying data parallelism to both the model and the criterion; see the usage sketch below.

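A minimal usage sketch (the two-GPU ``device_ids``, the toy network, and the
``nn.CrossEntropyLoss`` criterion are illustrative assumptions, not part of
this module)::

    import torch
    import torch.nn as nn
    from encoding.parallel import DataParallelModel, DataParallelCriterion

    net = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 2)).cuda()

    # Replicate the model across GPUs; per-GPU outputs stay on their own
    # devices instead of being gathered onto GPU 0.
    model = DataParallelModel(net, device_ids=[0, 1])
    # Scatter the target alongside the outputs and evaluate the loss on
    # each GPU, so loss computation no longer piles memory onto one device.
    criterion = DataParallelCriterion(nn.CrossEntropyLoss(), device_ids=[0, 1])

    x = torch.randn(8, 10).cuda()
    target = torch.randint(0, 2, (8,)).cuda()
    outputs = model(x)               # list with one output chunk per GPU
    loss = criterion(outputs, target)
    loss.backward()
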
.. note::
    Deprecated: please use ``torch.nn.parallel.DistributedDataParallel`` with :class:`encoding.nn.DistSyncBatchNorm` for the best performance; a sketch of that setup follows this note.

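A sketch of the recommended replacement is below; the ``torchrun``-style
``LOCAL_RANK`` environment variable and the small convolutional model are
assumptions for illustration, and the exact constructor arguments should be
checked against the :class:`encoding.nn.DistSyncBatchNorm` documentation::

    import os
    import torch
    import torch.nn as nn
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel
    import encoding

    # One process per GPU, e.g. launched with torchrun.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # DistSyncBatchNorm synchronizes batch statistics across processes,
    # standing in for per-device BatchNorm2d layers (the num_features
    # argument is assumed to mirror BatchNorm2d).
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1),
        encoding.nn.DistSyncBatchNorm(16),
        nn.ReLU(),
    ).cuda()
    model = DistributedDataParallel(model, device_ids=[local_rank])
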
.. automodule:: encoding.parallel
.. currentmodule:: encoding.parallel

:hidden:`DataParallelModel`
~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: DataParallelModel
    :members:

:hidden:`DataParallelCriterion`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: DataParallelCriterion
    :members:

:hidden:`allreduce`
~~~~~~~~~~~~~~~~~~~

.. autofunction:: allreduce