Unverified commit 71ca7944 authored by Christopher Akiki, committed by GitHub

Fix typo in perf docs (#19705)

parent fd5eac5f
@@ -311,7 +311,7 @@ We can see that this saved some more memory but at the same time training became
## Floating Data Types
- The idea of mixed precision training is that no all variables need to be stored in full (32-bit) floating point precision. If we can reduce the precision the variales and their computations are faster. Here are the commonly used floating point data types choice of which impacts both memory usage and throughput:
+ The idea of mixed precision training is that not all variables need to be stored in full (32-bit) floating point precision. If we can reduce the precision, the variables and their computations are faster. Here are the commonly used floating point data types, the choice of which impacts both memory usage and throughput:
- fp32 (`float32`)
- fp16 (`float16`)
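
For context on the line being fixed, here is a minimal sketch of why the choice of data type affects memory usage, assuming PyTorch is available (the tensor names are illustrative, not part of the docs being patched):

```python
import torch

# A tensor of ~one million elements stored in full fp32 precision...
x_fp32 = torch.randn(1024, 1024, dtype=torch.float32)
# ...and the same values cast down to half precision (fp16).
x_fp16 = x_fp32.to(torch.float16)

# fp32 uses 4 bytes per element, fp16 uses 2, so the cast halves the footprint.
print(x_fp32.element_size() * x_fp32.nelement())  # 4194304 bytes (4 MiB)
print(x_fp16.element_size() * x_fp16.nelement())  # 2097152 bytes (2 MiB)
```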