Commit d5f76d74 authored by Khoa Ho, committed by Facebook Github Bot

Clarify mixed precision training support (#766)

Summary:
Change the wording to avoid confusion. Mixed precision training provides both higher arithmetic throughput and numerical stability; it is not exactly synonymous with pure half-precision (FP16) training. Also mention tensor cores, since older-generation GPUs without tensor cores don't support true mixed precision training.
Pull Request resolved: https://github.com/pytorch/fairseq/pull/766

Differential Revision: D15559565

Pulled By: myleott

fbshipit-source-id: c71e720772657bb3e8ad330b58bf69e23beb614e
parent ffc3bb58
@@ -28,7 +28,7 @@ Fairseq features:
 - Diverse Beam Search ([Vijayakumar et al., 2016](https://arxiv.org/abs/1610.02424))
 - sampling (unconstrained and top-k)
 - large mini-batch training even on a single GPU via delayed updates
-- fast half-precision floating point (FP16) training
+- mixed precision training (trains faster with less GPU memory on [NVIDIA tensor cores](https://developer.nvidia.com/tensor-cores))
 - extensible: easily register new models, criterions, tasks, optimizers and learning rate schedulers
 We also provide [pre-trained models](#pre-trained-models-and-examples) for several benchmark
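As context for the wording change above, here is a minimal, hypothetical sketch of mixed precision training using PyTorch's `torch.cuda.amp` API. This is not fairseq's own implementation (in fairseq, mixed precision is enabled with the `--fp16` training flag), but it illustrates the idea the new README line describes: most compute runs in FP16 on tensor cores, while FP32 master weights and dynamic loss scaling preserve numerical stability.

```python
# Hypothetical sketch of mixed precision training with PyTorch AMP.
# Not fairseq code; the toy model and shapes below are illustrative only.
import torch
import torch.nn as nn

model = nn.Linear(1024, 1024).cuda()        # weights stay in FP32 (master weights)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()        # dynamic loss scaling for FP16 gradients

for step in range(100):
    x = torch.randn(32, 1024, device="cuda")
    target = torch.randn(32, 1024, device="cuda")

    optimizer.zero_grad()
    with torch.cuda.amp.autocast():         # eligible ops run in FP16 on tensor cores
        loss = nn.functional.mse_loss(model(x), target)

    scaler.scale(loss).backward()           # scale the loss to avoid FP16 underflow
    scaler.step(optimizer)                  # unscale grads; skip the step on inf/nan
    scaler.update()                         # adjust the loss scale for the next step
```

The loss scaling step is what distinguishes true mixed precision from naive FP16 training: small gradients that would underflow in FP16 are scaled into a representable range before the backward pass and unscaled before the optimizer update.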