Unverified Commit 7e45be73 authored by Przemyslaw Tredak, committed by GitHub

Added the NVFP4 section to the low precision training tutorial (#2237)



* Added the NVFP4 part to the low precision tutorial
Signed-off-by: Przemek Tredak <ptredak@nvidia.com>

* Added the runtime results
Signed-off-by: Przemek Tredak <ptredak@nvidia.com>

* Update docs/examples/fp8_primer.ipynb
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>

* Update docs/examples/fp8_primer.ipynb
Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>

* Update docs/examples/fp8_primer.ipynb
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>

* Update docs/examples/fp8_primer.ipynb
Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>

* Update docs/examples/fp8_primer.ipynb
Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>

* Update docs/examples/fp8_primer.ipynb
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>

---------
Signed-off-by: Przemek Tredak <ptredak@nvidia.com>
Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
Co-authored-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
parent 08779fd8
@@ -5,9 +5,9 @@
"id": "7b3e6954",
"metadata": {},
"source": [
"# Using FP8 with Transformer Engine\n",
"# Using FP8 and FP4 with Transformer Engine\n",
"\n",
"H100 GPU introduced support for a new datatype, FP8 (8-bit floating point), enabling higher throughput of matrix multiplies and convolutions. In this example we will introduce the FP8 datatype and show how to use it with Transformer Engine.\n",
"H100 GPU introduced support for a new datatype, FP8 (8-bit floating point), enabling higher throughput of matrix multiplies and convolutions. Blackwell added support for NVFP4 and MXFP8 datatypes. In this example we will introduce these low precision datatypes and show how to use them with Transformer Engine.\n",
"\n",
"## Introduction to FP8\n",
"\n",
@@ -100,19 +100,66 @@
"</figure>"
]
},
{
"cell_type": "markdown",
"id": "fd7b4f37-50a2-4d41-9067-cf0c471cb2d7",
"metadata": {},
"source": [
"## Beyond FP8 - training with NVFP4\n",
"\n",
"In addition to MXFP8, NVIDIA Blackwell introduced support for an even smaller, 4-bit format called NVFP4. The values are represented there in E2M1 format, able to represent values of magnitude up to +/-6.\n",
"\n",
"<figure align=\"center\" id=\"fig_8\">\n",
"<img src=\"FP4_format.png\" width=\"50%\">\n",
"<figcaption> Figure 8: FP4 E2M1 format can represent values between +/-6.</figcaption>\n",
"</figure>\n",
"\n",
"### NVFP4 Format\n",
"\n",
"NVFP4 format is similar to MXFP8 - it also uses granular scaling to preserve the dynamic range. The differences are:\n",
"\n",
" - Granularity of the scaling factors: in NVFP4 format a single scaling factor is used per block of 16 elements, whereas MXFP8 uses 1 scaling factor per block of 32 elements\n",
" - Datatype of the scaling factors: NVFP4 uses FP8 E4M3 as the scaling factor per block, whereas MXFP8 uses E8M0 as the scaling factor datatype. Choice of E4M3 for the scaling factor enables preservation of more information about mantissa, but does not enable the full dynamic range of FP32. Therefore, NVFP4 uses an additional single per-tensor FP32 scaling factor to avoid overflows.\n",
"\n",
"In the NVFP4 training recipe for weight tensors we use a different variant of the NVFP4 quantization, where a single scaling factor is shared by a 2D block of 16x16 elements. This is similar to the weight quantization scheme employed in [DeepSeek-v3 training](https://arxiv.org/abs/2412.19437v1), but with a much finer granularity.\n",
"\n",
"### NVFP4 training recipe\n",
"\n",
"The NVFP4 training recipe implemented in Transformer Engine is described in [Pretraining Large Language Models with NVFP4](https://arxiv.org/abs/2509.25149v1) paper. The main elements of the recipe are:\n",
"\n",
" - Stochastic Rounding. When quantizing gradients to NVFP4, we use stochastic rounding to avoid the bias introduced by quantization. With stochastic rounding values are rounded probabilistically to one of their two nearest representable numbers, with probabilities inversely\n",
"proportional to their distances.\n",
" - 2D Scaling. The non-square size of the quantization blocks, while increasing granularity, has a property that the quantized tensor and its transpose no longer hold the same values. This is important since the transposed tensors are used when calculating gradients of the linear layers. While most tensors are not sensitive to this issue during training, it does affect the training accuracy when applied to the weight tensors. Therefore, the weights of the linear layers are quantized using a 2D scheme, where a single scaling factor is shared by a 2D block of 16x16 elements.\n",
" - Random Hadamard Transforms. While microscaling reduces the dynamic range needed to represent tensor values, outliers can still have a\n",
"disproportionate impact on FP4 formats, degrading model accuracy. Random Hadamard transforms address this by reshaping the tensor distribution to be more Gaussian-like, which smooths outliers and makes tensors easier to represent accurately in NVFP4. In Transformer Engine, we use a 16x16 Hadamard matrix for activations and gradients when performing weight gradient computation.\n",
" - Last few layers in higher precision. The last few layers of the LLM are more sensitive to the quantization and so we recommend running them in higher precision (for example MXFP8). This is not done automatically in Transformer Engine, since TE does not have the full information about the structure of the network being trained. This can be easily achieved though by modifying the model training code to run the last few layers under a different `fp8_autocast` (or nesting 2 autocasts in order to override the recipe for a part of the network).\n",
"\n",
"The full linear layer utilizing NVFP4 is presented in Figure 9.\n",
"\n",
"<figure align=\"center\" id=\"fig_9\">\n",
"<img src=\"FP4_linear.png\" width=\"80%\">\n",
"<figcaption> Figure 9: Linear layer utilizing NVFP4</figcaption>\n",
"</figure>"
]
},
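To make the scheme above concrete, here is a minimal, pure-PyTorch sketch of the two-level NVFP4 scaling, stochastic rounding, and a random Hadamard transform. Everything in it (the `quantize_nvfp4` function, the constants, the block handling) is a hypothetical illustration written for this primer, not the Transformer Engine implementation; TE performs these steps inside fused CUDA kernels, and the `torch.float8_e4m3fn` cast assumes a PyTorch build with FP8 dtypes.

```python
# Illustrative sketch only: simulated NVFP4 quantization in pure PyTorch.
# None of these names come from Transformer Engine.
import torch

E2M1_MAX = 6.0    # largest magnitude representable in FP4 E2M1
E4M3_MAX = 448.0  # largest magnitude representable in FP8 E4M3
E2M1_GRID = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_nvfp4(x, block=16, stochastic=False):
    """Simulate NVFP4: per-16-element E4M3 block scales + one FP32 tensor scale."""
    blocks = x.float().reshape(-1, block)
    # The per-tensor FP32 scale keeps the per-block scales within E4M3 range.
    tensor_scale = (blocks.abs().max() / (E2M1_MAX * E4M3_MAX)).clamp(min=1e-30)
    # Per-block scales, themselves rounded to FP8 E4M3 (needs PyTorch >= 2.1).
    block_scale = blocks.abs().amax(dim=1, keepdim=True) / E2M1_MAX
    block_scale = (block_scale / tensor_scale).to(torch.float8_e4m3fn).float() * tensor_scale
    scaled = blocks / block_scale.clamp(min=1e-30)
    sign, mag = scaled.sign(), scaled.abs().clamp(max=E2M1_MAX).flatten()
    # Find the two nearest representable E2M1 magnitudes for every value.
    hi_idx = torch.searchsorted(E2M1_GRID, mag).clamp(max=len(E2M1_GRID) - 1)
    lo, hi = E2M1_GRID[(hi_idx - 1).clamp(min=0)], E2M1_GRID[hi_idx]
    if stochastic:
        # Round up with probability proportional to the distance from `lo`.
        p_hi = torch.where(hi > lo, (mag - lo) / (hi - lo), torch.zeros_like(mag))
        q = torch.where(torch.rand_like(mag) < p_hi, hi, lo)
    else:
        q = torch.where(mag - lo <= hi - mag, lo, hi)  # round to nearest
    return (sign * q.reshape(-1, block) * block_scale).reshape(x.shape)

# Random Hadamard transform: multiply blocks of 16 values by D @ H, where H is
# an orthonormal 16x16 Hadamard matrix (Sylvester construction) and D a random
# +/-1 diagonal, making the distribution more Gaussian-like before quantizing.
H = torch.ones(1, 1)
for _ in range(4):
    H = torch.cat([torch.cat([H, H], 1), torch.cat([H, -H], 1)], 0)
H = H / 4.0  # now H @ H.T == I
D = torch.diag(torch.randint(0, 2, (16,)).float() * 2 - 1)

x = torch.randn(4, 64)
x_rht = (x.reshape(-1, 16) @ (D @ H)).reshape(x.shape)        # forward transform
x_hat = quantize_nvfp4(x_rht).reshape(-1, 16) @ (D @ H).T     # quantize, invert
print((x - quantize_nvfp4(x)).abs().max())                    # round-to-nearest
print((x - quantize_nvfp4(x, stochastic=True)).abs().max())   # stochastic rounding
print((x - x_hat.reshape(x.shape)).abs().max())               # with RHT applied
```

The 2D weight scheme differs only in sharing one scaling factor across a 16x16 tile instead of a 1x16 block.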
{
"cell_type": "markdown",
"id": "cf5e0b0d",
"metadata": {},
"source": [
"## Using FP8 with Transformer Engine\n",
"## Using FP8 and FP4 with Transformer Engine\n",
"\n",
"Transformer Engine library provides tools enabling easy to use training with FP8 datatype using FP8 delayed scaling and MXFP8 strategies.\n",
"Transformer Engine library provides tools enabling easy to use training with FP8 and FP4 datatypes using different strategies.\n",
"\n",
"### FP8 recipe\n",
"\n",
"The [DelayedScaling](../api/common.rst#transformer_engine.common.recipe.DelayedScaling) recipe from the `transformer_engine.common.recipe` module stores all of the required options for training with FP8 delayed scaling: length of the amax history to use for scaling factor computation, FP8 data format, etc.\n",
"Similarly, [MXFP8BlockScaling](../api/common.rst#transformer_engine.common.recipe.MXFP8BlockScaling) from the same module may be used to enable MXFP8 training."
"Transformer Engine defines a range of different low precision recipes to choose from in the `transformer_engine.common.recipe` module.\n",
"\n",
" - The [DelayedScaling](../api/common.rst#transformer_engine.common.recipe.DelayedScaling) recipe stores all of the required options for training with FP8 delayed scaling: length of the amax history to use for scaling factor computation, FP8 data format, etc.\n",
" - [Float8CurrentScaling](../api/common.rst#transformer_engine.common.recipe.Float8CurrentScaling) recipe enables current per-tensor scaling with FP8.\n",
" - [Float8BlockScaling](../api/common.rst#transformer_engine.common.recipe.Float8BlockScaling) recipe enables block scaling with FP8 as described in [DeepSeek-v3 paper](https://arxiv.org/abs/2412.19437v1).\n",
" - [MXFP8BlockScaling](../api/common.rst#transformer_engine.common.recipe.MXFP8BlockScaling) recipe enables MXFP8 training.\n",
" - [NVFP4BlockScaling](../api/common.rst#transformer_engine.common.recipe.NVFP4BlockScaling) recipe enables NVFP4 training."
]
},
{
@@ -122,12 +169,13 @@
"metadata": {},
"outputs": [],
"source": [
"from transformer_engine.common.recipe import Format, DelayedScaling, MXFP8BlockScaling\n",
"from transformer_engine.common.recipe import Format, DelayedScaling, MXFP8BlockScaling, NVFP4BlockScaling\n",
"\n",
"fp8_format = Format.HYBRID # E4M3 during forward pass, E5M2 during backward pass\n",
"fp8_recipe = DelayedScaling(fp8_format=fp8_format, amax_history_len=16, amax_compute_algo=\"max\")\n",
"mxfp8_format = Format.E4M3 # E4M3 used everywhere\n",
"mxfp8_recipe = MXFP8BlockScaling(fp8_format=mxfp8_format)"
"mxfp8_recipe = MXFP8BlockScaling(fp8_format=mxfp8_format)\n",
"nvfp4_recipe = NVFP4BlockScaling()"
]
},
{
@@ -135,7 +183,7 @@
"id": "f9591eb5",
"metadata": {},
"source": [
"This recipe is then used to configure the FP8 training."
"This recipe is then used to configure the low precision training."
]
},
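As a sketch of what that configuration looks like (the notebook's own cells below do the equivalent; this assumes the `nvfp4_recipe` object created above and a GPU that supports the chosen recipe):

```python
import torch
import transformer_engine.pytorch as te

layer = te.Linear(768, 768).cuda()
x = torch.randn(128, 768, device="cuda")

# The recipe passed to fp8_autocast decides how the GEMMs inside are quantized.
with te.fp8_autocast(enabled=True, fp8_recipe=nvfp4_recipe):
    y = layer(x)
y.sum().backward()
```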
{
@@ -235,13 +283,13 @@
{
"data": {
"text/plain": [
"tensor([[ 0.2276, 0.2627, 0.3001, ..., 0.0346, 0.2211, 0.1188],\n",
" [-0.0963, -0.3725, 0.1717, ..., 0.0901, 0.0522, -0.3472],\n",
" [ 0.4526, 0.3482, 0.5976, ..., -0.0687, -0.0382, 0.1566],\n",
"tensor([[ 0.2276, 0.2629, 0.3000, ..., 0.1297, -0.3702, 0.1807],\n",
" [-0.0963, -0.3724, 0.1717, ..., -0.1250, -0.8501, -0.1669],\n",
" [ 0.4526, 0.3479, 0.5976, ..., 0.1685, -0.8864, -0.1977],\n",
" ...,\n",
" [ 0.1698, 0.6061, 0.0385, ..., -0.2875, -0.1152, -0.0260],\n",
" [ 0.0679, 0.2946, 0.2751, ..., -0.2284, 0.0517, -0.1441],\n",
" [ 0.1865, 0.2353, 0.9172, ..., 0.1085, 0.1135, 0.1438]],\n",
" [ 0.1698, 0.6062, 0.0385, ..., 0.4038, -0.4564, 0.0143],\n",
" [ 0.0679, 0.2947, 0.2750, ..., -0.3271, -0.4990, 0.1198],\n",
" [ 0.1865, 0.2353, 0.9170, ..., 0.0673, -0.5567, 0.1246]],\n",
" device='cuda:0', grad_fn=<_LinearBackward>)"
]
},
@@ -263,13 +311,13 @@
{
"data": {
"text/plain": [
"tensor([[ 0.2373, 0.2674, 0.2980, ..., 0.0233, 0.2498, 0.1131],\n",
" [-0.0767, -0.3778, 0.1862, ..., 0.0858, 0.0676, -0.3369],\n",
" [ 0.4615, 0.3593, 0.5813, ..., -0.0779, -0.0349, 0.1422],\n",
"tensor([[ 0.2373, 0.2674, 0.2980, ..., 0.1134, -0.3661, 0.1650],\n",
" [-0.0767, -0.3778, 0.1862, ..., -0.1370, -0.8448, -0.1770],\n",
" [ 0.4615, 0.3593, 0.5813, ..., 0.1696, -0.8826, -0.1826],\n",
" ...,\n",
" [ 0.1914, 0.6038, 0.0382, ..., -0.2847, -0.0991, -0.0423],\n",
" [ 0.0864, 0.2895, 0.2719, ..., -0.2388, 0.0772, -0.1541],\n",
" [ 0.2019, 0.2275, 0.9027, ..., 0.1022, 0.1300, 0.1444]],\n",
" [ 0.1914, 0.6038, 0.0382, ..., 0.4049, -0.4729, 0.0118],\n",
" [ 0.0864, 0.2895, 0.2719, ..., -0.3337, -0.4922, 0.1240],\n",
" [ 0.2019, 0.2275, 0.9027, ..., 0.0706, -0.5481, 0.1356]],\n",
" device='cuda:0', grad_fn=<_LinearBackward>)"
]
},
@@ -300,13 +348,13 @@
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[ 0.2276, 0.2629, 0.3000, ..., 0.0346, 0.2211, 0.1188],\n",
" [-0.0963, -0.3724, 0.1717, ..., 0.0901, 0.0522, -0.3470],\n",
" [ 0.4526, 0.3479, 0.5976, ..., -0.0686, -0.0382, 0.1566],\n",
"tensor([[ 0.2276, 0.2629, 0.3000, ..., 0.1297, -0.3702, 0.1807],\n",
" [-0.0963, -0.3724, 0.1717, ..., -0.1250, -0.8501, -0.1669],\n",
" [ 0.4526, 0.3479, 0.5976, ..., 0.1685, -0.8864, -0.1977],\n",
" ...,\n",
" [ 0.1698, 0.6062, 0.0385, ..., -0.2876, -0.1152, -0.0260],\n",
" [ 0.0679, 0.2947, 0.2750, ..., -0.2284, 0.0516, -0.1441],\n",
" [ 0.1865, 0.2353, 0.9170, ..., 0.1085, 0.1135, 0.1438]],\n",
" [ 0.1698, 0.6062, 0.0385, ..., 0.4038, -0.4564, 0.0143],\n",
" [ 0.0679, 0.2947, 0.2750, ..., -0.3271, -0.4990, 0.1198],\n",
" [ 0.1865, 0.2353, 0.9170, ..., 0.0673, -0.5567, 0.1246]],\n",
" device='cuda:0', grad_fn=<_LinearBackward>)\n"
]
}
@@ -339,19 +387,14 @@
{
"data": {
"text/plain": [
"tensor([[ 4.9591e-05, -1.9073e-04, 9.5367e-05, ..., -3.8147e-06,\n",
" 4.1962e-05, 2.2888e-05],\n",
" [ 2.2888e-05, -3.4332e-05, 2.2888e-05, ..., 2.6703e-05,\n",
" 5.3406e-05, -1.4114e-04],\n",
" [-3.8147e-05, 2.6703e-04, -3.8147e-06, ..., -5.7220e-05,\n",
" 4.1962e-05, -1.9073e-05],\n",
"tensor([[0., 0., 0., ..., 0., 0., 0.],\n",
" [0., 0., 0., ..., 0., 0., 0.],\n",
" [0., 0., 0., ..., 0., 0., 0.],\n",
" ...,\n",
" [ 1.1444e-05, -7.2479e-05, -3.8147e-06, ..., 5.3406e-05,\n",
" -1.5259e-05, 2.2888e-05],\n",
" [ 4.9591e-05, -9.5367e-05, 6.8665e-05, ..., -1.5259e-05,\n",
" 7.6294e-05, 4.5776e-05],\n",
" [-1.5259e-05, -7.6294e-06, 1.8692e-04, ..., -3.0518e-05,\n",
" -4.5776e-05, 7.6294e-06]], device='cuda:0', grad_fn=<SubBackward0>)"
" [0., 0., 0., ..., 0., 0., 0.],\n",
" [0., 0., 0., ..., 0., 0., 0.],\n",
" [0., 0., 0., ..., 0., 0., 0.]], device='cuda:0',\n",
" grad_fn=<SubBackward0>)"
]
},
"execution_count": 7,
@@ -370,6 +413,53 @@
"source": [
"The differences in result coming from FP8 execution do not matter during the training process, but it is good to understand them, e.g. during debugging the model."
]
},
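When such debugging is needed, a quick way to quantify the mismatch is shown below (a sketch; `out_fp8` and `out_ref` are hypothetical names for outputs of the same layer run with and without a low precision recipe):

```python
# Compare a low precision output against a reference run of the same layer.
diff = (out_fp8.float() - out_ref.float()).abs()
print(f"max abs diff:  {diff.max().item():.3e}")
print(f"mean abs diff: {diff.mean().item():.3e}")
```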
{
"cell_type": "markdown",
"id": "d45e8b6c-803b-4a4f-8835-c19b0a94bc6a",
"metadata": {},
"source": [
"### Using multiple recipes in the same training run\n",
"\n",
"Sometimes it is desirable to use multiple recipes in the same training run. An example of this is the NVFP4 training, where a few layers at the end of the training should be run in higher precision. This can be achieved by using multiple autocasts, either completely separately or in a nested way (this could be useful when e.g. we want to have a configurable overarching recipe but still hardcode a different recipe for some pieces of the network)."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "c663f694-41d6-47c0-a397-5fc56e692542",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[ 0.0547, 0.0039, -0.0664, ..., -0.2061, 0.2344, -0.3223],\n",
" [ 0.0131, -0.1436, 0.0168, ..., -0.4258, 0.1562, -0.0371],\n",
" [ 0.1074, -0.2773, 0.0576, ..., -0.2070, 0.0640, -0.1611],\n",
" ...,\n",
" [ 0.0825, -0.0630, 0.0571, ..., -0.3711, 0.1562, -0.4062],\n",
" [-0.1729, -0.1138, -0.0620, ..., -0.4238, 0.0703, -0.2070],\n",
" [-0.0908, -0.2148, 0.2676, ..., -0.4551, 0.1836, -0.4551]],\n",
" device='cuda:0', dtype=torch.bfloat16, grad_fn=<_LinearBackward>)\n"
]
}
],
"source": [
"my_linear1 = te.Linear(768, 768).bfloat16() # The first linear - we want to run it in FP4\n",
"my_linear2 = te.Linear(768, 768).bfloat16() # The second linear - we want to run it in MXFP8\n",
"\n",
"inp = inp.bfloat16()\n",
"\n",
"with te.fp8_autocast(fp8_recipe=nvfp4_recipe):\n",
" y = my_linear1(inp)\n",
" with te.fp8_autocast(fp8_recipe=mxfp8_recipe):\n",
" out = my_linear2(y)\n",
"\n",
"print(out)\n",
"\n",
"out.mean().backward()"
]
}
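The same effect can be obtained without nesting, using two fully separate autocasts (a sketch reusing the modules and recipes from the cell above):

```python
# Equivalent to the nested version for this simple sequence of layers.
with te.fp8_autocast(fp8_recipe=nvfp4_recipe):
    y = my_linear1(inp)
with te.fp8_autocast(fp8_recipe=mxfp8_recipe):
    out = my_linear2(y)
```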
],
"metadata": {
......