Benchmarks - Keep BatchNorm as fp32 for PyTorch CNN models cast to fp16 (#322)
**Description** The BatchNorm operator is not numerically stable in fp16. The PyTorch documentation recommends keeping BatchNorm in fp32 even for fp16 AMP models; see https://pytorch.org/docs/stable/amp.html#ops-that-can-autocast-to-float32. Preserving BatchNorm in fp32 makes the SuperBench CNN benchmarks more accurately reflect real mixed-precision workloads.
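As a rough sketch of the technique this PR describes (not necessarily SuperBench's exact implementation; the helper name `cast_to_fp16_keep_bn_fp32` is made up here), one common pattern, similar to Apex's `network_to_half`, is to cast the whole model to fp16 and then convert the BatchNorm modules back to fp32:

```python
import torch.nn as nn
import torchvision.models as models


def cast_to_fp16_keep_bn_fp32(model: nn.Module) -> nn.Module:
    """Cast a model to fp16 but keep BatchNorm layers in fp32.

    Hypothetical helper for illustration; BN statistics and affine
    parameters stay in fp32 for numerical stability.
    """
    # Cast all parameters and buffers to fp16 in place.
    model.half()
    # Convert BatchNorm layers back to fp32.
    for module in model.modules():
        if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            module.float()
    return model


if __name__ == "__main__":
    # Example: a torchvision ResNet-50 with fp16 weights and fp32 BatchNorm.
    model = cast_to_fp16_keep_bn_fp32(models.resnet50())
```

On GPU, cuDNN's batch-norm kernels accept fp16 activations with fp32 scale, bias, and running statistics, so no extra casts are needed around the BN layers in the forward pass.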