chenpangpang / transformers

Unverified commit 4c3d98dd, authored Dec 03, 2020 by Stas Bekman, committed by GitHub on Dec 03, 2020.
[s2s finetune_trainer] add instructions for distributed training (#8884)
parent aa60b230

Showing 1 changed file with 5 additions and 0 deletions.
examples/seq2seq/README.md
@@ -213,6 +213,11 @@ To see all the possible command line options, run:

```
python finetune_trainer.py --help
```
For multi-gpu training use `torch.distributed.launch`, e.g. with 2 gpus:

```bash
python -m torch.distributed.launch --nproc_per_node=2 finetune_trainer.py ...
```
**At the moment, `Seq2SeqTrainer` does not support *with teacher* distillation.**
All `Seq2SeqTrainer`-based fine-tuning scripts are included in the `builtin_trainer` directory.
...
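As context for the change above, a fuller multi-gpu invocation might look like the following sketch. It assumes only the standard `finetune_trainer.py` flags of that era (`--model_name_or_path`, `--data_dir`, `--output_dir`, plus the usual `Seq2SeqTrainingArguments`); the model name, paths, and hyperparameter values are placeholders, not values from this commit:

```bash
# Hypothetical sketch of a 2-gpu fine-tuning run. The model name,
# data/output paths, and hyperparameters below are placeholders.
python -m torch.distributed.launch --nproc_per_node=2 finetune_trainer.py \
    --model_name_or_path Helsinki-NLP/opus-mt-en-ro \
    --data_dir ./wmt_en_ro \
    --output_dir ./en_ro_finetuned \
    --do_train \
    --num_train_epochs 3 \
    --per_device_train_batch_size 8 \
    --learning_rate 3e-5
```

`torch.distributed.launch` spawns one process per GPU and passes each a `--local_rank` argument, which the `Trainer` uses to initialize `torch.distributed`; the script itself needs no changes beyond being launched this way.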