Unverified Commit 2afb2e0a authored by RandomGamingDev, committed by GitHub

Added `accelerator`-based gradient accumulation for basic_example (#8966)

added `accelerator`-based gradient accumulation for basic_example
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
parent d87fe95f
```diff
@@ -340,7 +340,7 @@ Now you can wrap all these components together in a training loop with 🤗 Accelerate
 ...                 loss = F.mse_loss(noise_pred, noise)
 ...                 accelerator.backward(loss)
-...                 if (step + 1) % config.gradient_accumulation_steps == 0:
+...                 if accelerator.sync_gradients:
 ...                     accelerator.clip_grad_norm_(model.parameters(), 1.0)
 ...                 optimizer.step()
 ...                 lr_scheduler.step()
```
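The switch from manual step counting to `accelerator.sync_gradients` relies on Accelerate's own gradient-accumulation state: when the `Accelerator` is created with `gradient_accumulation_steps` and the forward/backward pass runs inside `accelerator.accumulate(model)`, `sync_gradients` is `True` only on the micro-batch that triggers a real optimizer update. Below is a minimal, self-contained sketch of that pattern; the toy model, data, and hyperparameters are illustrative stand-ins, not the tutorial's UNet and image pipeline:

```python
import torch
import torch.nn.functional as F
from accelerate import Accelerator

# Illustrative stand-ins for the tutorial's model, optimizer, and data.
model = torch.nn.Linear(8, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
lr_scheduler = torch.optim.lr_scheduler.ConstantLR(optimizer)
dataset = torch.utils.data.TensorDataset(torch.randn(64, 8), torch.randn(64, 1))
train_dataloader = torch.utils.data.DataLoader(dataset, batch_size=4)

# Tell Accelerate to accumulate gradients over 4 micro-batches.
accelerator = Accelerator(gradient_accumulation_steps=4)
model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
    model, optimizer, train_dataloader, lr_scheduler
)

for inputs, targets in train_dataloader:
    # accumulate() makes the prepared optimizer.step()/zero_grad() no-ops on
    # intermediate micro-batches and defers gradient sync until the boundary.
    with accelerator.accumulate(model):
        loss = F.mse_loss(model(inputs), targets)
        accelerator.backward(loss)
        # sync_gradients is True only on the micro-batch that performs a
        # real update, so clipping runs once per effective batch.
        if accelerator.sync_gradients:
            accelerator.clip_grad_norm_(model.parameters(), 1.0)
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
```

One advantage of the `sync_gradients` check over the replaced `(step + 1) % config.gradient_accumulation_steps` test is that Accelerate owns the accumulation bookkeeping, including edge cases such as the final, possibly short, pass through the dataloader.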