Unverified Commit 33512f93 authored by mcarilli, committed by GitHub

Update README.md

parent b83e38a6
@@ -9,7 +9,7 @@ The trained model can then be used by the generate script to generate new text.
 `main_fp16_optimizer.py` with `--fp16` demonstrates use of `apex.fp16_utils.FP16_Optimizer` to automatically manage master parameters and loss scaling.
-This example is intended as an illustration of the mixed precision recipe, not necessarily as a performance showcase. However, it does demonstrate certain best practices. With `--fp16`, to enable Tensor Core use and achieve best performance, dimensions that participate in GEMMs in the model should be multiples of 8. Specifically, these are
+This example is intended as an illustration of the mixed precision recipe, not necessarily as a performance showcase. However, it does demonstrate certain best practices. With `--fp16`, to enable Tensor Core use and improve performance, dimensions that participate in GEMMs in the model should be multiples of 8. Specifically, these are
 * dictionary length (ntokens in `main.py`),
 * embedding size (`--emsize`),
 * hidden size (`--nhid`), and
......
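For context, the `FP16_Optimizer` pattern that the README text above refers to follows apex's documented wrapping interface (`FP16_Optimizer(...)`, `optimizer.backward(loss)` in place of `loss.backward()`, then `optimizer.step()`). The sketch below is illustrative only; the model, data, and hyperparameters are placeholders and are not taken from `main_fp16_optimizer.py`.

```python
import torch
from apex.fp16_utils import FP16_Optimizer

# Illustrative FP16 model; main_fp16_optimizer.py builds an RNN language model instead.
model = torch.nn.Linear(1024, 1024).cuda().half()
criterion = torch.nn.MSELoss()

# Wrap an ordinary optimizer. FP16_Optimizer keeps FP32 master copies of the
# FP16 parameters and applies (here dynamic) loss scaling automatically.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
optimizer = FP16_Optimizer(optimizer, dynamic_loss_scale=True)

inp = torch.randn(32, 1024).cuda().half()
target = torch.randn(32, 1024).cuda().half()

optimizer.zero_grad()
loss = criterion(model(inp), target)
optimizer.backward(loss)  # replaces loss.backward(); scales the loss before backprop
optimizer.step()          # updates FP32 masters, copies results back to the FP16 model
```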
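The multiple-of-8 sizes mentioned in the diff can be obtained by rounding the relevant dimensions up (for example, padding the dictionary). A small illustrative helper, not part of the example scripts, with made-up starting values:

```python
def round_up(value, multiple=8):
    """Round a dimension up to the nearest multiple (default 8) so that
    GEMMs involving it are eligible for Tensor Cores."""
    return ((value + multiple - 1) // multiple) * multiple

# Illustrative values; main.py derives ntokens from the corpus and reads
# --emsize / --nhid from the command line.
ntokens = round_up(33278)  # dictionary length -> 33280
emsize = round_up(650)     # embedding size    -> 656
nhid = round_up(650)       # hidden size       -> 656
print(ntokens, emsize, nhid)
```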