".github/git@developer.sourcefind.cn:OpenDAS/mmcv.git" did not exist on "23b2bdbf52c8c4960dc696ec35901146f839fd6d"
Commit 9ce80178 authored by Michael Carilli

Merge branch 'master' of https://github.com/NVIDIA/apex

parents f8557569 f17cd953
@@ -80,12 +80,12 @@ CUDA and C++ extensions via
```
$ git clone https://github.com/NVIDIA/apex
$ cd apex
-$ pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" .
+$ pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```
Apex also supports a Python-only build (required with Pytorch 0.4) via
```
-$ pip install -v --no-cache-dir .
+$ pip install -v --no-cache-dir ./
```
A Python-only build omits:
- Fused kernels required to use `apex.optimizers.FusedAdam`.
......
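Since a Python-only build omits the fused kernels, one option is to guard the `FusedAdam` import at runtime and fall back to a stock optimizer. The sketch below is not part of the diff above; the fallback to `torch.optim.Adam` is illustrative, and the exact failure point (import vs. construction) can vary with the apex version, so both are guarded.
```
import torch

model = torch.nn.Linear(10, 10).cuda()

try:
    # Available only when apex was built with --cpp_ext --cuda_ext.
    from apex.optimizers import FusedAdam
    optimizer = FusedAdam(model.parameters(), lr=1e-3)
except (ImportError, RuntimeError):
    # Python-only build: the fused kernel is unavailable, use the stock Adam.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```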
@@ -145,6 +145,11 @@ Gradient accumulation across iterations
The following should "just work," and properly accommodate multiple models/optimizers/losses, as well as
gradient clipping via the `instructions above`_::
    # If your intent is to simulate a larger batch size using gradient accumulation,
    # you can divide the loss by the number of accumulation iterations (so that gradients
    # will be averaged over that many iterations):
    loss = loss/iters_to_accumulate
    if iter%iters_to_accumulate == 0:
        # Every iters_to_accumulate iterations, unscale and step
        with amp.scale_loss(loss, optimizer) as scaled_loss:
......
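As context for the truncated snippet above, here is a hedged sketch of a complete accumulation loop around `amp.scale_loss`. It assumes `model`, `optimizer`, `criterion`, and `data_loader` already exist and were set up with `amp.initialize`; `iters_to_accumulate` and `max_norm` are illustrative names, not amp API, and the `delay_unscale=True` branch is one way to skip unscaling on the iterations that do not step.
```
import torch
from apex import amp

iters_to_accumulate = 4
max_norm = 1.0

for iteration, (inputs, targets) in enumerate(data_loader, start=1):
    loss = criterion(model(inputs), targets)
    # Divide by the accumulation window so gradients average over it,
    # simulating a larger batch size.
    loss = loss / iters_to_accumulate

    if iteration % iters_to_accumulate == 0:
        # Every iters_to_accumulate iterations, unscale and step.
        with amp.scale_loss(loss, optimizer) as scaled_loss:
            scaled_loss.backward()
        # Optional gradient clipping on the FP32 master gradients.
        torch.nn.utils.clip_grad_norm_(amp.master_params(optimizer), max_norm)
        optimizer.step()
        optimizer.zero_grad()
    else:
        # On intermediate iterations, accumulate gradients without unscaling
        # or stepping; delay_unscale defers the unscale/master-grad copy.
        with amp.scale_loss(loss, optimizer, delay_unscale=True) as scaled_loss:
            scaled_loss.backward()
```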