# Simple examples of FP16_Optimizer functionality

To use `FP16_Optimizer` on a half-precision model, or a model with a mixture of 
half and float parameters, only two lines of your training script need to change:
1. Construct an `FP16_Optimizer` instance from an existing optimizer.
2. Replace `loss.backward()` with `optimizer.backward(loss)` (see the sketch below).

[Full API Documentation](https://nvidia.github.io/apex/fp16_utils.html#automatic-management-of-master-params-loss-scaling)

See "Other Options" at the bottom of this page for some cases that require special treatment.

#### Minimal Working Sample
`minimal.py` shows the basic usage of `FP16_Optimizer` with either static or dynamic loss scaling.  Test via `python minimal.py`.
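
For reference, a sketch of the two construction options (the wrapped model and learning rate are placeholders; the keyword arguments are the ones described in the API documentation linked above):
```python
import torch
from apex.fp16_utils import FP16_Optimizer

model = torch.nn.Linear(4, 4).cuda().half()    # placeholder half-precision model

# Static loss scaling: every loss is multiplied by a fixed constant.
static_opt = FP16_Optimizer(torch.optim.SGD(model.parameters(), lr=1e-3),
                            static_loss_scale=128.0)

# Dynamic loss scaling: the scale is adjusted automatically when overflowing
# gradients are detected.
dynamic_opt = FP16_Optimizer(torch.optim.SGD(model.parameters(), lr=1e-3),
                             dynamic_loss_scale=True)
```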

#### Closures
`FP16_Optimizer` supports closures with the same control flow as ordinary Pytorch optimizers.  
`closure.py` shows an example.  Test via `python closure.py`.

See [the API documentation](https://nvidia.github.io/apex/fp16_utils.html#apex.fp16_utils.FP16_Optimizer.step) for more details.
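
As a sketch (with a placeholder model and static loss scaling, since that is the simplest case for closures), the closure calls `optimizer.backward(loss)` just as the ordinary training loop does:
```python
import torch
from apex.fp16_utils import FP16_Optimizer

model = torch.nn.Linear(512, 512).cuda().half()   # placeholder half-precision model
x = torch.randn(64, 512, device="cuda", dtype=torch.half)

optimizer = FP16_Optimizer(torch.optim.SGD(model.parameters(), lr=1e-3),
                           static_loss_scale=128.0)

def closure():
    optimizer.zero_grad()
    loss = model(x).float().sum()
    optimizer.backward(loss)    # inside the closure, too: not loss.backward()
    return loss

# Same control flow as an ordinary Pytorch optimizer that accepts a closure:
loss = optimizer.step(closure)
```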

#### Serialization/Deserialization
`FP16_Optimizer` supports saving and loading with the same control flow as ordinary Pytorch optimizers.
`save_load.py` shows an example.  Test via `python save_load.py`.

See [the API documentation](https://nvidia.github.io/apex/fp16_utils.html#apex.fp16_utils.FP16_Optimizer.load_state_dict) for more details.
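
A sketch of the usual checkpoint pattern (the model, file name, and dictionary keys are placeholders):
```python
import torch
from apex.fp16_utils import FP16_Optimizer

model = torch.nn.Linear(4, 4).cuda().half()       # placeholder half-precision model
optimizer = FP16_Optimizer(torch.optim.SGD(model.parameters(), lr=1e-3),
                           dynamic_loss_scale=True)

# Saving: FP16_Optimizer exposes state_dict() like an ordinary optimizer.
torch.save({"model": model.state_dict(),
            "optimizer": optimizer.state_dict()}, "checkpoint.pt")

# Loading: restore the model, reconstruct the wrapped optimizer, then load its state.
checkpoint = torch.load("checkpoint.pt")
model.load_state_dict(checkpoint["model"])
optimizer.load_state_dict(checkpoint["optimizer"])
```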

#### Distributed
**distributed_apex** shows an example using `FP16_Optimizer` with Apex DistributedDataParallel.
The usage of `FP16_Optimizer` with distributed does not need to change from ordinary single-process 
usage. Test via
```bash
cd distributed_apex
bash run.sh
```

**distributed_pytorch** shows an example using `FP16_Optimizer` with Pytorch DistributedDataParallel.
Again, the usage of `FP16_Optimizer` with distributed does not need to change from ordinary 
single-process usage.  Test via
```bash
cd distributed_pytorch
bash run.sh
```
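
As a sketch of that claim (assuming the script is launched with `torch.distributed.launch` or `torchrun`, so the usual rank environment variables are set; the model and data are placeholders), the only distributed-specific lines are process-group setup and the model wrapper:
```python
import os
import torch
import torch.distributed as dist
from apex.fp16_utils import FP16_Optimizer
from apex.parallel import DistributedDataParallel as DDP

# Assumes a standard distributed launcher has set the environment variables.
dist.init_process_group(backend="nccl", init_method="env://")
torch.cuda.set_device(int(os.environ.get("LOCAL_RANK", "0")))

model = torch.nn.Linear(512, 512).cuda().half()   # placeholder half-precision model
model = DDP(model)                                # distributed wrapper

optimizer = FP16_Optimizer(torch.optim.SGD(model.parameters(), lr=1e-3),
                           dynamic_loss_scale=True)

# From here on, the loop is identical to the single-process case:
x = torch.randn(64, 512, device="cuda", dtype=torch.half)
optimizer.zero_grad()
loss = model(x).float().sum()
optimizer.backward(loss)
optimizer.step()
```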

#### Other Options

Gradient clipping requires that calls to `torch.nn.utils.clip_grad_norm`
be replaced with [fp16_optimizer_instance.clip_master_grads](https://nvidia.github.io/apex/fp16_utils.html#apex.fp16_utils.FP16_Optimizer.clip_master_grads).
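
For example, a sketch with a placeholder model and a max norm of 1.0:
```python
import torch
from apex.fp16_utils import FP16_Optimizer

model = torch.nn.Linear(4, 4).cuda().half()       # placeholder half-precision model
optimizer = FP16_Optimizer(torch.optim.SGD(model.parameters(), lr=1e-3),
                           static_loss_scale=128.0)

loss = model(torch.randn(8, 4, device="cuda", dtype=torch.half)).float().sum()
optimizer.zero_grad()
optimizer.backward(loss)

# Replaces torch.nn.utils.clip_grad_norm(model.parameters(), 1.0):
# clip the fp32 master gradients that FP16_Optimizer maintains internally.
optimizer.clip_master_grads(1.0)
optimizer.step()
```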

Multiple losses will work if you simply replace
```python
loss1.backward()
loss2.backward()
```
with 
```python
optimizer.backward(loss1)
optimizer.backward(loss2)
```
but `FP16_Optimizer` can be told to handle this more efficiently using the 
[update_master_grads](https://nvidia.github.io/apex/fp16_utils.html#apex.fp16_utils.FP16_Optimizer.update_master_grads) option.
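
A sketch of that more efficient pattern, based on the `update_master_grads` argument of `backward` and the `update_master_grads()` method described in the linked docs (model, data, and losses are placeholders):
```python
import torch
from apex.fp16_utils import FP16_Optimizer

model = torch.nn.Linear(4, 4).cuda().half()       # placeholder half-precision model
optimizer = FP16_Optimizer(torch.optim.SGD(model.parameters(), lr=1e-3),
                           static_loss_scale=128.0)

x = torch.randn(8, 4, device="cuda", dtype=torch.half)
loss1 = model(x).float().sum()
loss2 = (model(x).float() ** 2).sum()

optimizer.zero_grad()
# Accumulate fp16 gradients from each backward pass without copying them to
# the fp32 master gradients every time...
optimizer.backward(loss1, update_master_grads=False)
optimizer.backward(loss2, update_master_grads=False)
# ...then perform the fp16 -> fp32 master-gradient copy once at the end.
optimizer.update_master_grads()
optimizer.step()
```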