Unverified commit 0afa5071 authored by Nino Risteski, committed by GitHub

Update model_memory_anatomy.md (#25896)

typo fixes
parent a4dd53d8
@@ -76,7 +76,7 @@ GPU memory occupied: 0 MB.
That looks good: the GPU memory is not occupied, as we would expect before we load any models. If that's not the case on
your machine, make sure to stop all processes that are using GPU memory. However, not all free GPU memory can be used by
the user. When a model is loaded to the GPU, the kernels are also loaded, which can take up 1-2GB of memory. To see how
much it is, we load a tiny tensor into the GPU, which triggers the kernels to be loaded as well.
```py
@@ -105,7 +105,7 @@ how much space just the weights use.
GPU memory occupied: 2631 MB.
```
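The tiny-tensor trick described above can be sketched as follows. This is a hedged sketch, not the exact code from the doc: it assumes PyTorch is installed, and it falls back to CPU when no GPU is present so it stays runnable anywhere.

```python
# Hedged sketch: moving even a tiny tensor to the GPU forces PyTorch to load
# its CUDA kernels, so the memory increase you observe afterwards is mostly
# kernel overhead (roughly 1-2GB), not the tensor's own few bytes of data.
import torch

# Fall back to CPU so the sketch also runs on machines without a GPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

tiny = torch.ones((1, 1)).to(device)  # ~4 bytes of data
print(f"tiny tensor lives on: {tiny.device}")
```

On a GPU machine, checking memory usage (e.g. with `nvidia-smi`) right after this line would already show the kernel overhead.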
We can see that the model weights alone take up 1.3 GB of GPU memory. The exact number depends on the specific
GPU you are using. Note that on newer GPUs a model can sometimes take up more space, since the weights are loaded in an
optimized fashion that speeds up the usage of the model. Now we can also quickly check if we get the same result
as with the `nvidia-smi` CLI:
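The 1.3 GB figure above can be sanity-checked with quick arithmetic. This is an illustrative estimate, assuming a bert-large-sized model of roughly 340M parameters stored as fp32 (4 bytes per parameter); the assumed parameter count is not taken from the measurement above.

```python
# Back-of-the-envelope estimate of the memory the weights alone occupy.
num_params = 340_000_000   # assumed parameter count (bert-large-sized model)
bytes_per_param = 4        # fp32: 4 bytes per weight

weights_gb = num_params * bytes_per_param / 1024**3
print(f"Estimated weight memory: {weights_gb:.2f} GB")  # ~1.3 GB
```

The remaining gap up to the 2631 MB reported above is consistent with the 1-2GB of kernel overhead discussed earlier.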