"examples/vscode:/vscode.git/clone" did not exist on "dca2580b8494f2358ef2634bd8d16ffd213fe30f"
Commit 034b8f02 authored by Michael Carilli

Updating imagenet README

parent 37a4b221
@@ -7,8 +7,7 @@ It implements training of popular model architectures, such as ResNet, AlexNet,
`main_fp16_optimizer.py` with `--fp16` demonstrates use of `apex.fp16_utils.FP16_Optimizer` to automatically manage master parameters and loss scaling.
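
A minimal sketch of the `FP16_Optimizer` pattern the script demonstrates; the model, batch shapes, and hyperparameters below are placeholders, not taken from `main_fp16_optimizer.py` itself:

```python
import torch
from apex.fp16_utils import FP16_Optimizer

# Placeholder model, cast to FP16 for half-precision training.
model = torch.nn.Linear(784, 10).cuda().half()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# FP16_Optimizer wraps the existing optimizer: it keeps FP32 master
# copies of the parameters and manages (here, dynamic) loss scaling.
optimizer = FP16_Optimizer(optimizer, dynamic_loss_scale=True)

inp = torch.randn(32, 784).cuda().half()
target = torch.randint(0, 10, (32,)).cuda()

loss = torch.nn.functional.cross_entropy(model(inp).float(), target)

optimizer.zero_grad()
optimizer.backward(loss)  # replaces loss.backward(); applies the loss scale
optimizer.step()          # updates master weights, copies them back to the FP16 model
```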
`apex.parallel.DistributedDataParallel` automatically allreduces and averages gradients during `backward()`. If you wish to control the allreduce manually instead (for example, to carry out the allreduce every few iterations instead of every iteration), [apex.parallel.Reducer](https://nvidia.github.io/apex/parallel.html#apex.parallel.Reducer) provides a convenient wrapper. `main_reducer.py` is identical to `main.py`, except that it shows the use of `Reducer` instead of `DistributedDataParallel`.
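
A hedged sketch of the manual-allreduce pattern described above, using `apex.parallel.Reducer`; the toy model, the every-4-iterations schedule, and the `env://` process-group setup are illustrative assumptions rather than the contents of `main_reducer.py`:

```python
import torch
import apex.parallel

# Assumes launch via torch.distributed.launch, which sets the env:// variables.
torch.distributed.init_process_group(backend="nccl", init_method="env://")

model = torch.nn.Linear(10, 10).cuda()

# Reducer synchronizes parameters across ranks at construction, then leaves
# the timing of gradient allreduce entirely to the user via reducer.reduce().
reducer = apex.parallel.Reducer(model)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

allreduce_every = 4  # illustrative: allreduce once every 4 iterations

for step in range(16):
    inp = torch.randn(8, 10).cuda()
    loss = model(inp).sum()
    loss.backward()  # gradients accumulate locally between allreduces
    if (step + 1) % allreduce_every == 0:
        reducer.reduce()       # allreduce-and-average gradients across ranks
        optimizer.step()
        optimizer.zero_grad()
```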
## Requirements