For a more comprehensive tutorial, check out our `Quickstart Notebook <https://github.com/NVIDIA/TransformerEngine/blob/main/docs/examples/quickstart.ipynb>`_.
For a more comprehensive tutorial, check out our `Getting Started Guide <https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/getting_started.html>`_.
.. overview-end-marker-do-not-remove
...
...
@@ -175,15 +175,22 @@ For example to use the NGC PyTorch container interactively,
.. code-block:: bash
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:25.08-py3
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:26.01-py3
For example, to use the NGC JAX container interactively,
.. code-block:: bash
docker run --gpus all -it --rm nvcr.io/nvidia/jax:25.08-py3
docker run --gpus all -it --rm nvcr.io/nvidia/jax:26.01-py3
Where 25.08 (corresponding to August 2025 release) is the container version.
Where 26.01 (corresponding to January 2026 release) is the container version.
We recommend updating to the latest NGC container available here:
If you run any examples, please ensure you are using a matching version of TransformerEngine. TransformerEngine is pre-built and packaged inside the containers, with examples available at ``/opt/transformerengine`` or ``/opt/transformer-engine``. If you would like to use examples from the TE main branch and are running into import errors, please try the latest pip package or build from source, although NGC containers are recommended for ease of use for most users.
* **Solution:** This can occur when TE is built against the container's system installation of cuDNN, but the virtual environment pulls in the ``nvidia-cudnn-cu12/cu13`` pip packages. To resolve this, when building TE from source, specify the following environment variables to point to the cuDNN in your virtual environment.
* **Symptoms:** Regular TE installs work correctly but UV wheel builds fail at runtime.
* **Solution:** Ensure that ``uv build --wheel --no-build-isolation -v`` is used during the wheel build as well as during the pip installation of the wheel. Use ``-v`` for verbose output to verify that TE is not pulling in a version of PyTorch or JAX that differs from the UV environment's version.
* **Solution:** Ensure ``--no-build-isolation`` is used during installation. If pre-building wheels, ensure that the wheel is both built and installed with ``--no-build-isolation``. See "Problems using UV or Virtual Environments" above if using UV.
Copyright (c) 2022-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
See LICENSE for license information.
FP8 Current Scaling
===================================
The FP8 current scaling recipe is the simplest low-precision recipe provided by Transformer Engine.
To understand how this recipe works, we first need to examine what the FP8 data type is and how it differs from other floating point formats.
FP8 data type
-------------
The FP8 datatype, introduced with the Hopper architecture, is actually two distinct datatypes, useful in different parts of neural network training:
* E4M3 -- consists of 1 sign bit, 4 exponent bits and 3 bits of mantissa. It can store values up to +/-448 and ``nan``.
* E5M2 -- consists of 1 sign bit, 5 exponent bits and 2 bits of mantissa. It can store values up to +/-57344, +/- ``inf`` and ``nan``. The tradeoff of the increased dynamic range is lower precision of the stored values.
.. raw:: html
:file: img/fp8_formats.svg
*Figure 1: Structure of the floating point datatypes. All of the values shown (in FP16, BF16, FP8 E4M3 and FP8 E5M2) are the closest representations of value 0.3952.*
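The difference in precision and range between these formats can be inspected directly. Below is a minimal sketch, assuming a PyTorch build with FP8 dtype support (2.1 or newer), that round-trips the value from Figure 1 through each format and prints the dynamic range of both FP8 formats.

.. code-block:: python

    import torch

    # Nearest representable value of 0.3952 in each format (cf. Figure 1).
    x = torch.tensor(0.3952)
    for dtype in (torch.float16, torch.bfloat16, torch.float8_e4m3fn, torch.float8_e5m2):
        print(dtype, x.to(dtype).float().item())

    # Dynamic range of the two FP8 formats.
    print(torch.finfo(torch.float8_e4m3fn).max)  # 448.0
    print(torch.finfo(torch.float8_e5m2).max)    # 57344.0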
**E4M3 and E5M2 usage in training**
By default, Transformer Engine uses a hybrid approach:
* *Forward pass* - activations and weights require more precision, so the E4M3 datatype is used to store them.
* *Backward pass* - gradients are less susceptible to precision loss but require a higher dynamic range, so the E5M2 datatype is preferred.
The user can configure this behavior via the ``fp8_format`` parameter of the recipe.
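A minimal sketch of selecting the format through the recipe, assuming the PyTorch API and a CUDA device (``Float8CurrentScaling``, ``Format`` and ``fp8_format`` are the names used by Transformer Engine's ``recipe`` module):

.. code-block:: python

    import torch
    import transformer_engine.pytorch as te
    from transformer_engine.common import recipe

    # HYBRID keeps E4M3 for the forward pass and E5M2 for gradients;
    # Format.E4M3 would use E4M3 for both instead.
    fp8_recipe = recipe.Float8CurrentScaling(fp8_format=recipe.Format.HYBRID)

    linear = te.Linear(768, 768, device="cuda")
    x = torch.randn(32, 768, device="cuda")

    with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
        y = linear(x)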
Scaling factors
---------------
The limited dynamic range of the FP8 datatypes is insufficient for many tensors.
To address this, values in the tensor are scaled. The FP8 current scaling recipe uses one **FP32** scaling factor per tensor. The representation of a tensor element ``x`` in FP8 precision is given by:
.. code-block:: python
x = x_fp8 * s
where
* ``x_fp8`` is the FP8 value (E4M3 or E5M2),
* ``s`` is a global **FP32** scaling factor applied to the entire tensor.
**FP8 Current Scaling quantization**
Let's take a closer look at how quantization to FP8 with a scaling factor is implemented in
the FP8 Current Scaling recipe.
.. raw:: html
:file: img/fp8_scaling_concept.svg
*Figure 3: Quantization to FP8 consists of amax (absolute maximum) computation, scaling to fit the FP8 range and casting to the respective FP8 format.*
Quantization to FP8 consists of 3 steps:
1. Computation of the absolute maximum value of the tensor - we refer to it as ``amax``.
2. Applying the scaling factor of ``fp8_max / amax`` to the tensor, to fit it into the FP8 range.
3. Casting into the respective FP8 format using *Round To Nearest Even (RTNE)*: values round to the nearest representable FP8 value, and when exactly halfway between two values, they round to the one with an even mantissa to minimize systematic bias.
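These steps map directly onto a few lines of tensor code. The snippet below is an illustrative sketch of per-tensor current scaling, not Transformer Engine's internal implementation; it assumes a PyTorch build with FP8 dtypes, and the helper name is hypothetical.

.. code-block:: python

    import torch

    FP8_E4M3_MAX = torch.finfo(torch.float8_e4m3fn).max  # 448.0

    def quantize_current_scaling(x: torch.Tensor):
        amax = x.abs().max()                          # step 1: absolute maximum
        scale = FP8_E4M3_MAX / amax                   # step 2: scaling factor fp8_max / amax
        x_fp8 = (x * scale).to(torch.float8_e4m3fn)   # step 3: cast to FP8 (round to nearest)
        return x_fp8, 1.0 / scale                     # keep s = 1 / scale for dequantization

    x = torch.randn(1024, 1024)
    x_fp8, s = quantize_current_scaling(x)
    x_dequant = x_fp8.float() * s                     # x ≈ x_fp8 * s, as in the formula above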
**Performance analysis**
Quantization is a memory-bound operation that requires reading the tensor twice:
* First read: compute ``amax`` across all elements.
* Second read: apply the scaling factor and cast to FP8.
This is a significant overhead compared to other recipes, which typically require only a single memory read.
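A rough back-of-the-envelope estimate of the extra traffic, assuming a BF16 input tensor and ignoring the few bytes needed for the amax and scale:

.. code-block:: python

    # Bytes moved when quantizing a BF16 tensor of N elements to FP8.
    N = 4096 * 4096
    bf16_bytes, fp8_bytes = 2, 1

    two_pass = 2 * N * bf16_bytes + N * fp8_bytes  # current scaling: amax read + cast read, FP8 write
    one_pass = 1 * N * bf16_bytes + N * fp8_bytes  # single-read recipes: one read, FP8 write
    print(two_pass / one_pass)                     # ~1.67x more memory traffic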
.. raw:: html
:file: img/fp8_cast_process.svg
*Figure 4: FP8 quantization with current scaling recipe - two tensor reads are needed, one to compute amax and one to apply the scaling factor and cast to FP8.*
Transpose handling
------------------
*Ada and Hopper*
On Ada and Hopper, the backward pass requires a transposed FP8 tensor.
The columnwise layout is physically different from the rowwise layout, so a transpose operation is needed.
All 3 options from the :ref:`Performance Considerations Transpose handling section <handling_transposes>` are supported.
*Blackwell and later*
Blackwell hardware supports multiple GEMM layouts natively, eliminating the need for explicit transposes.
The rowwise and columnwise tensors share the same physical memory layout.