@@ -22,7 +22,7 @@ Get started by installing 🤗 Accelerate:
pip install accelerate
```
-Then import and create an [`Accelerator`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator) object. `Accelerator` will automatically detect your type of distributed setup and initialize all the necessary components for training. You don't need to explicitly place your model on a device.
+Then import and create an [`~accelerate.Accelerator`] object. The [`~accelerate.Accelerator`] will automatically detect your type of distributed setup and initialize all the necessary components for training. You don't need to explicitly place your model on a device.
```py
>>> from accelerate import Accelerator

>>> accelerator = Accelerator()
```
@@ -32,7 +32,7 @@ Then import and create an [`Accelerator`](https://huggingface.co/docs/accelerate
## Prepare to accelerate
-The next step is to pass all the relevant training objects to the [`prepare`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.prepare) method. This includes your training and evaluation DataLoaders, a model and an optimizer:
+The next step is to pass all the relevant training objects to the [`~accelerate.Accelerator.prepare`] method. This includes your training and evaluation DataLoaders, a model and an optimizer:
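For reference, the call looks like this (a minimal sketch, assuming the DataLoaders, model, and optimizer have already been created); [`~accelerate.Accelerator.prepare`] returns the objects in the same order they are passed in:

```py
>>> # prepare() wraps each object for the detected distributed setup
>>> train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
...     train_dataloader, eval_dataloader, model, optimizer
... )
```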
@@ -42,7 +42,7 @@ The next step is to pass all the relevant training objects to the [`prepare`](ht
## Backward
-The last addition is to replace the typical `loss.backward()` in your training loop with 🤗 Accelerate's [`backward`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.backward) method:
+The last addition is to replace the typical `loss.backward()` in your training loop with 🤗 Accelerate's [`~accelerate.Accelerator.backward`] method:
```py
>>> for epoch in range(num_epochs):
...     for batch in train_dataloader:
...         loss = model(**batch).loss
...         accelerator.backward(loss)  # instead of loss.backward()
...         optimizer.step()
...         optimizer.zero_grad()
```
@@ -121,7 +121,7 @@ accelerate launch train.py
### Train with a notebook
-🤗 Accelerate can also run in a notebook if you're planning on using Colaboratory's TPUs. Wrap all the code responsible for training in a function, and pass it to `notebook_launcher`:
+🤗 Accelerate can also run in a notebook if you're planning on using Colaboratory's TPUs. Wrap all the code responsible for training in a function, and pass it to [`~accelerate.notebook_launcher`]:
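A minimal sketch of that call, assuming the training loop above has been wrapped in a function named `training_function` (the name is just a placeholder):

```py
>>> from accelerate import notebook_launcher

>>> # `training_function` stands in for your wrapped training loop
>>> notebook_launcher(training_function)
```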