    Clarify mixed precision training support (#766) · d5f76d74
    Khoa Ho authored
    Summary:
    Change the wording to avoid confusion. Mixed precision training provides both higher arithmetic throughput and numerical stability, so it is not exactly synonymous with pure half-precision/FP16 training. Also mention tensor cores, since older-generation GPUs without tensor cores don't support true mixed precision training.
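
    For context, here is a minimal sketch of what mixed precision training looks like using PyTorch's `torch.cuda.amp` utilities. This is an illustrative example, not fairseq's own FP16 training path: FP32 master weights and dynamic loss scaling preserve numerical stability while most forward/backward math runs in FP16 on tensor-core GPUs.

    ```python
    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"
    use_amp = device == "cuda"  # true mixed precision needs a CUDA GPU, ideally with tensor cores

    model = nn.Linear(128, 10).to(device)                 # parameters stay in FP32 ("master weights")
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    scaler = torch.cuda.amp.GradScaler(enabled=use_amp)   # dynamic loss scaling guards small gradients

    inputs = torch.randn(32, 128, device=device)
    targets = torch.randint(0, 10, (32,), device=device)

    for _ in range(10):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast(enabled=use_amp):    # eligible ops run in FP16, the rest in FP32
            loss = nn.functional.cross_entropy(model(inputs), targets)
        scaler.scale(loss).backward()   # backward pass on the scaled loss
        scaler.step(optimizer)          # unscales gradients; skips the step if inf/nan appeared
        scaler.update()                 # adjusts the loss scale for the next iteration
    ```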
    Pull Request resolved: https://github.com/pytorch/fairseq/pull/766
    
    Differential Revision: D15559565
    
    Pulled By: myleott
    
    fbshipit-source-id: c71e720772657bb3e8ad330b58bf69e23beb614e
README.md