We strongly encourage moving to the new Amp API, because it's more versatile, easier to use, and future-proof. The original :class:`FP16_Optimizer` and the old "Amp" API are deprecated, and subject to removal at any time.
**For users of the old "Amp" API**
In the new API, ``opt_level O1`` performs the same patching of the Torch namespace as the old Amp API.
However, the new API allows choosing static or dynamic loss scaling, while the old API only allowed dynamic loss scaling.
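For example (a minimal sketch using the ``amp.initialize`` call described below; the model and optimizer here are placeholders, not taken from the Apex docs), the scaling mode is selected through the ``loss_scale`` argument::

    import torch
    from apex import amp

    model = torch.nn.Linear(10, 10).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

    # Dynamic loss scaling, the only behavior the old Amp API offered:
    model, optimizer = amp.initialize(model, optimizer, opt_level="O1",
                                      loss_scale="dynamic")

    # Static loss scaling: pass a fixed value instead, e.g.
    # loss_scale=128.0 (the value here is for illustration only).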
In the new API, the old call to ``amp_handle = amp.init()``, and the returned ``amp_handle``, are no
longer exposed or necessary. The new ``amp.initialize()`` does the duty of ``amp.init()`` (and more).
Therefore, any existing calls to ``amp_handle = amp.init()`` should be deleted.
The functions formerly exposed through ``amp_handle`` are now free
functions accessible through the ``amp`` module.
The backward context manager must be changed accordingly::

    # old API
    with amp_handle.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()

    ->

    # new API
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()
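Putting the pieces together, here is a minimal end-to-end sketch of a single training step under the new API (the model, optimizer, and data below are placeholders chosen for illustration, not taken from the Apex docs)::

    import torch
    from apex import amp

    model = torch.nn.Linear(10, 10).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

    # amp.initialize() takes the place of the old ``amp_handle = amp.init()``
    model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

    data = torch.randn(8, 10).cuda()
    target = torch.randn(8, 10).cuda()

    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(data), target)

    # scale_loss is now a free function on the amp module, not a method
    # of the removed amp_handle
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()
    optimizer.step()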
For now, the deprecated "Amp" API documentation can still be found on the Github README: https://github.com/NVIDIA/apex/tree/master/apex/amp. The old API calls that `annotate user functions`_ to run
with a particular precision are still honored by the new API.
This site contains the API documentation for Apex (https://github.com/nvidia/apex),
a Pytorch extension with NVIDIA-maintained utilities to streamline mixed precision and distributed training. Some of the code here will be included in upstream Pytorch eventually. The intention of Apex is to make up-to-date utilities available to users as quickly as possible.
Installation requires CUDA 9 or later, PyTorch 0.4 or later, and Python 3. Full installation instructions can be found at https://github.com/NVIDIA/apex#quick-start. Install by running
::

    git clone https://www.github.com/nvidia/apex
    cd apex
    python setup.py install [--cuda_ext] [--cpp_ext]

.. toctree::
   :maxdepth: 1

   amp

.. toctree::
   :maxdepth: 1
   :caption: Legacy mixed precision utilities

   fp16_utils

.. toctree::
   :maxdepth: 1
   :caption: Distributed Training
    raise RuntimeError("--cuda_ext was requested, but nvcc was not found. Are you sure your environment has nvcc available? If you're installing within a container from https://hub.docker.com/r/pytorch/pytorch, only images whose names contain 'devel' will provide nvcc.")
else:
# Set up macros for forward/backward compatibility hack around
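For illustration only, one way such an nvcc check could be written (this is a sketch under the stated assumptions, not the actual logic in Apex's ``setup.py``; the ``nvcc_available`` helper name is hypothetical)::

    import shutil

    def nvcc_available():
        # shutil.which returns the full path to the nvcc binary if it is
        # on PATH, or None if it cannot be found.
        return shutil.which("nvcc") is not None

    if not nvcc_available():
        raise RuntimeError("--cuda_ext was requested, but nvcc was not found.")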