Unverified Commit c356290c authored by Aditya Soni's avatar Aditya Soni Committed by GitHub

Typo fix as per PyTorch v1.1+

parent b0ee7c7d
@@ -104,6 +104,6 @@ for batch in train_data:
     loss = model(batch)
     loss.backward()
     torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)  # Gradient clipping is not in AdamW anymore (so you can use amp without issue)
-    scheduler.step()
     optimizer.step()
+    scheduler.step()
 ```
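The change above reflects the PyTorch 1.1 behavior change: `scheduler.step()` should be called after `optimizer.step()`, otherwise the first value of the learning-rate schedule is skipped. The following torch-free sketch uses hypothetical stand-in classes (`ToyOptimizer`, `ToyStepLR`, not the real PyTorch API) purely to illustrate why the call order matters:

```python
# Hypothetical minimal stand-ins for an optimizer and an LR scheduler,
# used only to demonstrate the call-order effect; not the real PyTorch API.
class ToyOptimizer:
    def __init__(self, lr):
        self.lr = lr
        self.applied_lrs = []  # learning rate actually used at each update

    def step(self):
        self.applied_lrs.append(self.lr)


class ToyStepLR:
    """Halves the optimizer's learning rate on every scheduler step."""
    def __init__(self, optimizer, gamma=0.5):
        self.optimizer = optimizer
        self.gamma = gamma

    def step(self):
        self.optimizer.lr *= self.gamma


opt = ToyOptimizer(lr=1.0)
sched = ToyStepLR(opt)

# Correct order for PyTorch >= 1.1: optimizer.step() first, then scheduler.step().
for _ in range(3):
    opt.step()
    sched.step()

print(opt.applied_lrs)  # [1.0, 0.5, 0.25] -- the initial lr is actually used
```

With the pre-1.1 order (`sched.step()` before `opt.step()`), the first update would already run at the decayed rate 0.5 and the initial learning rate of 1.0 would never be applied, which is exactly what the corrected snippet in the diff avoids.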