[Feature]: Support auto_fp16 using torch.cuda.amp when PyTorch >= 1.6.0 (#951)
* add torch.cuda.amp to fp16_utils and optimizers
* use `with` context manager for autocast
* add doc to explain the behavior differences between real amp and ours
* fix docstring