(as introduced in the NVIDIA webinar at https://info.nvidia.com/webinar-mixed-precision-with-pytorch-reg-page.html. The `amp` and `FP16_Optimizer` tools currently in master are separate prototypes, which will be unified by the Amp 1.0 API.)
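Roughly, the two prototypes are driven as follows. This is a sketch against the prototype APIs in current master (`amp.init` returning a handle with a `scale_loss` context manager, and the `FP16_Optimizer` wrapper), which the Amp 1.0 API will replace; in practice you would pick one tool, not both:

```python
import torch
from apex import amp
from apex.fp16_utils import FP16_Optimizer

# Prototype 1: amp. init() returns a handle whose scale_loss context
# manager applies loss scaling around backward().
model = torch.nn.Linear(512, 512).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
amp_handle = amp.init()

loss = model(torch.randn(8, 512, device="cuda")).sum()
optimizer.zero_grad()
with amp_handle.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()
optimizer.step()

# Prototype 2: FP16_Optimizer wraps an existing optimizer for a model
# cast to half precision; backward() is called on the wrapper, not the loss.
model_fp16 = torch.nn.Linear(512, 512).half().cuda()
optimizer_fp16 = FP16_Optimizer(torch.optim.SGD(model_fp16.parameters(), lr=1e-3),
                                dynamic_loss_scale=True)

loss = model_fp16(torch.randn(8, 512, device="cuda").half()).sum()
optimizer_fp16.zero_grad()
optimizer_fp16.backward(loss)  # replaces loss.backward(); handles loss scaling
optimizer_fp16.step()
```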
The `api_refactor` branch tracks this work. I will merge it to master, along with documentation and examples, by the end of February.
# Introduction
This repository holds NVIDIA-maintained utilities to streamline mixed precision and distributed training in PyTorch.
For example, the pure-Python `SyncBatchNorm` fallback (used when apex is installed without the `--cuda_ext` extension) rejects the `channel_last` layout and warns once that the CUDA backend is unavailable:

```python
# Inside SyncBatchNorm.__init__: reject channel_last and warn once about the fallback.
if channel_last:
    raise AttributeError("channel_last is not supported by primitive SyncBatchNorm implementation. Try install apex with `--cuda_ext` if channel_last is desired.")
if not SyncBatchNorm.warned:
    print("Warning: using Python fallback for SyncBatchNorm, possibly because apex was installed without --cuda_ext. The exception raised when attempting to import the cuda backend was: ", self.syncbn_import_error)
    SyncBatchNorm.warned = True
```