Commit e215dd41 authored by Michael Carilli

Updating READMEs

parent 18afea3b
The intention of `amp` is to be the "on-ramp" to easy FP16 training: achieve all the numerical stability of full FP32 training, with most of the performance benefits of full FP16 training.
The intention of `FP16_Optimizer` is to be the "highway" for FP16 training: achieve most of the numerical stability of full FP32 training, and almost all the performance benefits of full FP16 training.
[API Documentation](https://nvidia.github.io/apex/fp16_utils.html#automatic-management-of-master-params-loss-scaling)
[Python Source](https://github.com/NVIDIA/apex/tree/master/apex/fp16_utils)
[Simple examples with FP16_Optimizer](https://github.com/NVIDIA/apex/tree/master/examples/FP16_Optimizer_simple)
[Imagenet with FP16_Optimizer](https://github.com/NVIDIA/apex/tree/master/examples/imagenet)
# Simple examples of FP16_Optimizer functionality
#### Minimal Working Sample
`minimal.py` shows the basic usage of `FP16_Optimizer` with either static or dynamic loss scaling. Test via
```bash
python minimal.py
```
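For orientation, the core pattern exercised by `minimal.py` looks roughly like the sketch below. The toy model, data, and hyperparameters are placeholders rather than the script's actual contents; the FP16-specific pieces are the wrapper's loss-scaling arguments, `optimizer.backward(loss)`, and `optimizer.step()`.

```python
import torch
from apex.fp16_utils import FP16_Optimizer

# Placeholder model and data; any module converted to half precision works the same way.
model = torch.nn.Linear(512, 512).cuda().half()
x = torch.randn(64, 512).cuda().half()
target = torch.randn(64, 512).cuda().half()

# Wrap an ordinary optimizer.  Passing static_loss_scale=128.0 (for example)
# instead of dynamic_loss_scale=True selects a fixed scale by hand.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
optimizer = FP16_Optimizer(optimizer, dynamic_loss_scale=True)

for _ in range(10):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), target)
    # The wrapper scales the loss and routes gradients to FP32 master weights,
    # so call optimizer.backward(loss) instead of loss.backward().
    optimizer.backward(loss)
    optimizer.step()
```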
#### Closures
`FP16_Optimizer` supports closures with the same control flow as ordinary PyTorch optimizers.
`closure.py` shows an example. Test via
```bash
python closure.py
```
See [the API documentation](https://nvidia.github.io/apex/fp16_utils.html#apex.fp16_utils.FP16_Optimizer.step) for more details.
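As a rough sketch of that control flow (the toy model and data are placeholders), the closure computes the loss and calls `optimizer.backward(loss)` in place of `loss.backward()`, and is then handed to `step`:

```python
import torch
from apex.fp16_utils import FP16_Optimizer

model = torch.nn.Linear(512, 512).cuda().half()
x = torch.randn(64, 512).cuda().half()
target = torch.randn(64, 512).cuda().half()

optimizer = FP16_Optimizer(torch.optim.SGD(model.parameters(), lr=1e-3),
                           static_loss_scale=128.0)

def closure():
    # Use optimizer.backward(loss) inside the closure so loss scaling and the
    # FP32 master gradients are still handled by the wrapper.
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), target)
    optimizer.backward(loss)
    return loss

# Same call pattern as an ordinary PyTorch optimizer that accepts a closure.
loss = optimizer.step(closure)
```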
#### Checkpointing
`FP16_Optimizer` also supports checkpointing with the same control flow as ordinary PyTorch optimizers.
`save_load.py` shows an example. Test via
```bash
python save_load.py
```
See [the API documentation](https://nvidia.github.io/apex/fp16_utils.html#apex.fp16_utils.FP16_Optimizer.load_state_dict) for more details.
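A sketch of the round trip, with the file name and model as placeholder choices; `FP16_Optimizer` exposes `state_dict()` and `load_state_dict()` just like the optimizer it wraps, and its state dict also covers the loss scaler and FP32 master weights:

```python
import torch
from apex.fp16_utils import FP16_Optimizer

model = torch.nn.Linear(512, 512).cuda().half()
optimizer = FP16_Optimizer(torch.optim.SGD(model.parameters(), lr=1e-3),
                           dynamic_loss_scale=True)

# ... train for a while ...

# Save: same pattern as an ordinary optimizer.
torch.save({"model": model.state_dict(),
            "optimizer": optimizer.state_dict()}, "checkpoint.pt")

# Restore: build the model and wrapped optimizer first, then load both state dicts.
checkpoint = torch.load("checkpoint.pt")
model.load_state_dict(checkpoint["model"])
optimizer.load_state_dict(checkpoint["optimizer"])
```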
#### Distributed
**distributed_pytorch** shows an example using `FP16_Optimizer` with PyTorch DistributedDataParallel.
The usage of `FP16_Optimizer` with distributed training does not need to change from ordinary single-process usage.
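A rough sketch of the combination, assuming the usual multi-process launch; the process-group setup, model, and data below are placeholder boilerplate, and the FP16-specific lines are identical to the single-process examples above:

```python
import torch
import torch.distributed as dist
from apex.fp16_utils import FP16_Optimizer

# Placeholder process-group setup; rank, world size, and device selection
# (torch.cuda.set_device) normally come from your launcher's environment.
dist.init_process_group(backend="nccl", init_method="env://")

model = torch.nn.Linear(512, 512).cuda().half()
model = torch.nn.parallel.DistributedDataParallel(model)

# The wrap is identical to single-process usage.
optimizer = FP16_Optimizer(torch.optim.SGD(model.parameters(), lr=1e-3),
                           dynamic_loss_scale=True)

x = torch.randn(64, 512).cuda().half()
target = torch.randn(64, 512).cuda().half()

optimizer.zero_grad()
loss = torch.nn.functional.mse_loss(model(x), target)
optimizer.backward(loss)  # unchanged: loss scaling is still handled by the wrapper
optimizer.step()
```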